1 code implementation • 14 Feb 2024 • Jiwon Song, Kyungseok Oh, Taesu Kim, HyungJun Kim, Yulhwa Kim, Jae-Joon Kim
In this paper, we introduce SLEB, a novel approach designed to streamline LLMs by eliminating redundant transformer blocks.
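As a rough illustration of the block-pruning idea (not SLEB's exact metric or procedure, which the paper defines), one can score each transformer block by how little it changes its input and drop the most redundant ones. The `block_redundancy_scores` helper, the cosine-similarity score, and the toy blocks below are assumptions made for this sketch.

```python
# A minimal sketch of redundancy-based transformer block pruning, in the
# spirit of SLEB. The scoring rule here is an illustrative assumption.
import torch
import torch.nn as nn

@torch.no_grad()
def block_redundancy_scores(blocks: nn.ModuleList, hidden: torch.Tensor):
    """Score each block by how little it changes its input: a block whose
    output is nearly identical to its input is a pruning candidate."""
    scores, x = [], hidden
    for block in blocks:
        y = block(x)
        sim = torch.nn.functional.cosine_similarity(
            x.flatten(1), y.flatten(1), dim=1
        ).mean().item()  # high similarity -> block contributes little
        scores.append(sim)
        x = y
    return scores

def prune_most_redundant(blocks: nn.ModuleList, hidden, num_remove: int):
    scores = block_redundancy_scores(blocks, hidden)
    # Keep the blocks with the lowest input/output similarity.
    keep = sorted(range(len(blocks)), key=lambda i: scores[i])[: len(blocks) - num_remove]
    return nn.ModuleList([blocks[i] for i in sorted(keep)])

# Toy usage with stand-in blocks; real use would pass LLM decoder layers.
blocks = nn.ModuleList([nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(8)])
pruned = prune_most_redundant(blocks, torch.randn(4, 64), num_remove=2)
print(len(pruned))  # 6
```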
no code implementations • 7 Feb 2024 • Hyesung Jeon, Yulhwa Kim, Jae-Joon Kim
In resource-constrained scenarios, post-training quantization (PTQ), with its reduced training overhead, is often preferred over quantization-aware training (QAT), despite the latter's potential for higher accuracy.
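For context, a minimal sketch of what PTQ means in practice: weights are quantized after training with a simple scaling rule and no gradient updates. This is generic symmetric min-max quantization, not the specific method of the paper above.

```python
# Generic post-training quantization (PTQ): quantize trained weights
# with a per-tensor min-max scale, no retraining required.
import torch

def quantize_minmax(w: torch.Tensor, num_bits: int = 8):
    """Symmetric min-max quantization of a weight tensor."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit
    scale = w.abs().max() / qmax            # one scale per tensor
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return q.to(torch.int8), scale

def dequantize(q: torch.Tensor, scale: torch.Tensor):
    return q.float() * scale

w = torch.randn(256, 256)
q, s = quantize_minmax(w, num_bits=8)
print((w - dequantize(q, s)).abs().max())   # worst-case quantization error
```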
no code implementations • 3 Jul 2023 • Jiwoong Choi, Minkyu Kim, Daehyun Ahn, Taesu Kim, Yulhwa Kim, Dongwon Jo, Hyesung Jeon, Jae-Joon Kim, HyungJun Kim
The emergence of diffusion models has greatly broadened the scope of high-fidelity image synthesis, resulting in notable advancements in both practical implementation and academic research.
no code implementations • 23 Mar 2019 • HyungJun Kim, Yulhwa Kim, Sungju Ryu, Jae-Joon Kim
We demonstrate that BitSplit versions of LeNet-5, VGG-9, AlexNet, and ResNet-18 can be trained to similar classification accuracy at lower computational cost than conventional low-precision (<= 4-bit) multi-bit networks.
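A minimal sketch of the bit-splitting idea, under the assumption of unsigned integer weights: a k-bit weight tensor decomposes into k binary bit-planes, so a multi-bit matmul becomes a weighted sum of binary matmuls. The helper names are illustrative, and the actual BitSplit training procedure is not shown.

```python
# Decompose low-precision integer weights into binary bit-planes, so a
# k-bit dot product is computed as k binary dot products weighted by 2^b.
import torch

def bit_split(w_int: torch.Tensor, num_bits: int):
    """Split unsigned integer weights into binary {0,1} bit-planes."""
    return [((w_int >> b) & 1).float() for b in range(num_bits)]

def bitsplit_matmul(x: torch.Tensor, w_int: torch.Tensor, num_bits: int):
    """Compute x @ w as a sum of binary matmuls weighted by 2^b."""
    out = torch.zeros(x.shape[0], w_int.shape[1])
    for b, plane in enumerate(bit_split(w_int, num_bits)):
        out += (2 ** b) * (x @ plane)       # one binary matmul per plane
    return out

x = torch.randn(2, 8)
w = torch.randint(0, 16, (8, 4))            # 4-bit unsigned weights
assert torch.allclose(bitsplit_matmul(x, w, 4), x @ w.float(), atol=1e-4)
```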
1 code implementation • 6 Nov 2018 • Yulhwa Kim, HyungJun Kim, Jae-Joon Kim
Recently, RRAM-based Binary Neural Network (BNN) hardware has been gaining interest, as it requires only 1-bit sense amplifiers and eliminates the need for high-resolution ADCs and DACs.
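To make the 1-bit readout point concrete: in a BNN with {-1, +1} weights and activations, a dot product reduces to XNOR plus popcount, which is why a single-bit sense result per column can suffice. The sketch below shows only the arithmetic equivalence; the RRAM crossbar mapping itself is beyond its scope.

```python
# BNN dot product as XNOR + popcount: encode +1 as bit 1 and -1 as bit 0;
# matching bits contribute +1, mismatches -1, so dot = 2*popcount - n.
import torch

def binarize(x: torch.Tensor) -> torch.Tensor:
    """Map real values to {-1, +1}."""
    return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

def xnor_popcount_dot(a: torch.Tensor, b: torch.Tensor) -> int:
    matches = ((a > 0) == (b > 0)).sum().item()   # popcount of XNOR
    return 2 * matches - a.numel()

a = binarize(torch.randn(128))
w = binarize(torch.randn(128))
assert xnor_popcount_dot(a, w) == int((a * w).sum().item())
```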