Search Results for author: Changhun Lee

Found 4 papers, 2 papers with code

Reward Dropout Improves Control: Bi-objective Perspective on Reinforced LM

1 code implementation • 6 Oct 2023 • Changhun Lee, Chiehyeon Lim

We study the theoretical aspects of Reinforced Language Models (RLMs) from a bi-objective optimization perspective.

OWQ: Outlier-Aware Weight Quantization for Efficient Fine-Tuning and Inference of Large Language Models

2 code implementations • 4 Jun 2023 • Changhun Lee, Jungyu Jin, Taesu Kim, HyungJun Kim, Eunhyeok Park

Large language models (LLMs) with hundreds of billions of parameters require powerful server-grade GPUs for inference, limiting their practical deployment.

Quantization

INSTA-BNN: Binary Neural Network with INSTAnce-aware Threshold

no code implementations • ICCV 2023 • Changhun Lee, HyungJun Kim, Eunhyeok Park, Jae-Joon Kim

Binary Neural Networks (BNNs) have emerged as a promising solution for reducing the memory footprint and compute costs of deep neural networks, but they suffer from quality degradation due to limited representational freedom, as activations and weights are constrained to binary values (see the sketch after this entry).

Quantization
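
The abstract above refers to the standard binarization constraint of BNNs. The following is a minimal illustrative sketch of that constraint only, not the INSTA-BNN method; the per-instance threshold shown here (the mean of each sample) is an assumption chosen for illustration, not the paper's formulation.

```python
import numpy as np

def binarize_weights(w):
    # Weights are constrained to {-1, +1} via the sign function.
    return np.where(w >= 0.0, 1.0, -1.0)

def binarize_activations(x, threshold=0.0):
    # Activations are also constrained to {-1, +1}; `threshold`
    # controls where the sign flips.
    return np.where(x >= threshold, 1.0, -1.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))   # a batch of pre-activations
w = rng.normal(size=(8, 3))   # full-precision weights

# Hypothetical instance-aware threshold: one value per sample,
# derived from that sample's own statistics (illustrative only).
per_instance_t = x.mean(axis=1, keepdims=True)

y = binarize_activations(x, per_instance_t) @ binarize_weights(w)
print(y.shape)  # (4, 3): every multiply-accumulate used only +/-1 operands
```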

Improving Accuracy of Binary Neural Networks using Unbalanced Activation Distribution

no code implementations • CVPR 2021 • HyungJun Kim, Jihoon Park, Changhun Lee, Jae-Joon Kim

We also show that adjusting the threshold values of binary activation functions results in an unbalanced distribution of the binary activations, which increases the accuracy of BNN models (a small sketch of this effect follows this entry).

Binarization
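
A minimal sketch of the effect described in the abstract above: shifting the threshold of a binary activation function skews (unbalances) the fraction of +1 versus -1 outputs. The threshold values and the Gaussian pre-activations are arbitrary examples for illustration, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
pre_act = rng.normal(size=100_000)   # stand-in pre-activation values

for t in (0.0, -0.5, 0.5):
    binary = np.where(pre_act >= t, 1.0, -1.0)
    frac_pos = (binary > 0).mean()
    print(f"threshold={t:+.1f} -> +1 fraction ~ {frac_pos:.2f}")

# threshold=+0.0 -> roughly balanced (~0.50 of outputs are +1)
# threshold=-0.5 -> skewed toward +1 (~0.69)
# threshold=+0.5 -> skewed toward -1 (~0.31)
```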
