Search Results for author: Junhan Kim

Found 4 papers, 0 papers with code

Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers

no code implementations • 14 Feb 2024 • Junhan Kim, Kyungphil Park, Chungman Lee, Ho-young Kim, Joonyoung Kim, Yongkweon Jeon

Through extensive experiments on various language models and complexity analysis, we demonstrate that aespa is accurate and efficient in quantizing Transformer models.

Quantization

Vision Transformer-based Feature Extraction for Generalized Zero-Shot Learning

no code implementations • 2 Feb 2023 • Jiseob Kim, Kyuhong Shim, Junhan Kim, Byonghyo Shim

In AAM, the correlation between each patch feature and the synthetic image attribute is used as the importance weight for each patch.
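For context, a minimal sketch of how such correlation-based patch weighting could look, assuming ViT patch embeddings and an attribute vector of matching dimension; this is an illustration of the idea in the abstract, not the paper's AAM implementation:

```python
import torch
import torch.nn.functional as F

def attribute_weighted_pooling(patch_features, attribute_embedding):
    """Weight ViT patch features by their correlation with an attribute vector.

    Shapes and the softmax normalization are assumptions for illustration.
    patch_features:      (num_patches, dim) patch embeddings from a ViT.
    attribute_embedding: (dim,) synthetic image-attribute vector.
    """
    # Cosine similarity between each patch feature and the attribute vector.
    correlation = F.cosine_similarity(
        patch_features, attribute_embedding.unsqueeze(0), dim=-1
    )                                            # (num_patches,)
    # Turn correlations into per-patch importance weights.
    weights = torch.softmax(correlation, dim=0)  # (num_patches,)
    # Aggregate patches into a single attribute-aware image feature.
    return (weights.unsqueeze(-1) * patch_features).sum(dim=0)  # (dim,)
```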

Attribute • Generalized Zero-Shot Learning

Semantic Feature Extraction for Generalized Zero-shot Learning

no code implementations • 29 Dec 2021 • Junhan Kim, Kyuhong Shim, Byonghyo Shim

The key idea of the proposed approach, henceforth referred to as semantic feature extraction-based GZSL (SE-GZSL), is to use a semantic feature containing only attribute-related information when learning the relationship between the image and the attribute.
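A rough illustration of that idea under assumed layer sizes: an encoder maps the image feature to a semantic feature that is supervised to predict class attributes. The module and objective below are hypothetical, not the SE-GZSL architecture or training procedure.

```python
import torch
import torch.nn as nn

class SemanticFeatureExtractor(nn.Module):
    """Toy sketch: extract an attribute-related (semantic) feature from an
    image feature and relate it to class attributes.

    Layer sizes and the regression-style objective are assumptions.
    """

    def __init__(self, img_dim=2048, sem_dim=512, attr_dim=85):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(img_dim, sem_dim), nn.ReLU())
        self.attr_head = nn.Linear(sem_dim, attr_dim)

    def forward(self, image_feature):
        semantic = self.encoder(image_feature)     # attribute-related feature
        predicted_attr = self.attr_head(semantic)  # image-attribute relationship
        return semantic, predicted_attr

# Usage: penalize the gap between predicted and ground-truth class attributes.
model = SemanticFeatureExtractor()
img_feat = torch.randn(8, 2048)
true_attr = torch.rand(8, 85)
_, pred_attr = model(img_feat)
loss = nn.functional.mse_loss(pred_attr, true_attr)
```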

Attribute • Generalized Zero-Shot Learning

Gradual Federated Learning with Simulated Annealing

no code implementations • 11 Oct 2021 • Luong Trung Nguyen, Junhan Kim, Byonghyo Shim

Federated averaging (FedAvg) is a popular federated learning (FL) technique that updates the global model by averaging local models and then transmits the updated global model to devices for their local model update.
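A minimal sketch of the FedAvg aggregation step described here, assuming uniform client weighting; device sampling, local training, and the paper's gradual/simulated-annealing schedule are not shown.

```python
import copy
import torch

def fedavg_round(global_model, local_state_dicts, weights=None):
    """Average local model parameters into the global model (one FedAvg round).

    local_state_dicts: list of state_dicts returned by the participating devices.
    weights: optional per-device averaging weights; defaults to uniform.
    """
    if weights is None:
        weights = [1.0 / len(local_state_dicts)] * len(local_state_dicts)

    # Weighted average of every parameter/buffer across devices.
    new_state = copy.deepcopy(local_state_dicts[0])
    for key in new_state:
        new_state[key] = sum(w * sd[key] for w, sd in zip(weights, local_state_dicts))

    # The updated global model is then sent back to devices for the next round.
    global_model.load_state_dict(new_state)
    return global_model
```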

Federated Learning
