1 code implementation • 6 Oct 2023 • Changhun Lee, Chiehyeon Lim
We study the theoretical aspects of Reinforced Language Models (RLMs) from a bi-objective optimization perspective.
2 code implementations • 4 Jun 2023 • Changhun Lee, Jungyu Jin, Taesu Kim, HyungJun Kim, Eunhyeok Park
Large language models (LLMs) with hundreds of billions of parameters require powerful server-grade GPUs for inference, limiting their practical deployment.
no code implementations • ICCV 2023 • Changhun Lee, HyungJun Kim, Eunhyeok Park, Jae-Joon Kim
Binary Neural Networks (BNNs) have emerged as a promising solution for reducing the memory footprint and compute costs of deep neural networks, but they suffer from quality degradation due to limited representational freedom, since activations and weights are constrained to binary values.
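To make the binary constraint concrete, here is a minimal sketch (assuming a PyTorch-style setup; the class name and the straight-through estimator are illustrative, not taken from the paper) of how weights and activations are restricted to {-1, +1}:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign binarization with a straight-through estimator (STE) backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Constrain every value to {-1, +1}.
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped-identity gradient: pass gradients only where |x| <= 1.
        return grad_output * (x.abs() <= 1).float()

# Both weights and activations would typically be binarized this way before each layer.
x = torch.randn(4, 8, requires_grad=True)
x_bin = BinarizeSTE.apply(x)
```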
no code implementations • CVPR 2021 • HyungJun Kim, Jihoon Park, Changhun Lee, Jae-Joon Kim
We also show that adjusting the threshold values of binary activation functions results in an unbalanced distribution of binary activations, which increases the accuracy of BNN models.
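As an illustration only (the function name and threshold values below are assumptions, not the paper's method), shifting the threshold of a binary activation skews how many outputs land on +1 versus -1:

```python
import torch

def binary_activation(x: torch.Tensor, threshold: float = 0.0) -> torch.Tensor:
    # Binarize against a shiftable threshold; moving the threshold away from 0
    # unbalances the +1 / -1 distribution of the outputs.
    return torch.where(x >= threshold, torch.ones_like(x), -torch.ones_like(x))

x = torch.randn(10_000)
print((binary_activation(x, 0.0) > 0).float().mean())  # roughly 0.5: balanced
print((binary_activation(x, 0.3) > 0).float().mean())  # below 0.5: unbalanced
```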