1 code implementation • 16 Feb 2024 • Yeonhong Park, Jake Hyun, SangLyul Cho, Bonggeun Sim, Jae W. Lee
Considerable effort has recently been directed toward compressing Large Language Models (LLMs), which deliver groundbreaking capabilities across diverse applications but incur significant deployment costs due to their sheer size.
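As a rough illustration of one common compression approach, low-bit weight quantization, here is a minimal round-to-nearest sketch in NumPy. It is a generic example of the technique, not the specific method proposed in this work.

```python
import numpy as np

def quantize_rtn(weights: np.ndarray, bits: int = 4):
    """Round-to-nearest uniform quantization of a weight matrix.

    A generic illustration of low-bit weight quantization, one common
    form of LLM compression; not this paper's method.
    """
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for 4-bit signed
    scale = np.abs(weights).max() / qmax          # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale                               # store int codes + one scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale           # approximate original weights

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_rtn(w, bits=4)
print(np.abs(w - dequantize(q, s)).max())         # worst-case quantization error
```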
1 code implementation • 19 Aug 2022 • Yeonhong Park, Sunhong Min, Jae W. Lee
We propose Ginex, the first SSD-based GNN training system that can process billion-scale graph datasets on a single machine.
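A minimal sketch of the general idea behind such a system: node features live in SSD-backed storage (a NumPy memmap stands in for it here) and frequently accessed nodes are served from a small in-memory cache. The names, sizes, and naive eviction policy below are illustrative assumptions, not Ginex's actual design.

```python
import numpy as np

# Hypothetical setup: features on "SSD" (memmap file) + small RAM cache.
NUM_NODES, FEAT_DIM, CACHE_SIZE = 50_000, 128, 5_000

features = np.memmap("features.bin", dtype=np.float32, mode="w+",
                     shape=(NUM_NODES, FEAT_DIM))   # stand-in for SSD storage

cache: dict[int, np.ndarray] = {}                   # in-memory feature cache

def gather_features(node_ids: np.ndarray) -> np.ndarray:
    """Fetch features for a sampled mini-batch, serving hot nodes from RAM."""
    out = np.empty((len(node_ids), FEAT_DIM), dtype=np.float32)
    for i, n in enumerate(node_ids):
        n = int(n)
        if n not in cache:                          # cache miss -> SSD read
            if len(cache) >= CACHE_SIZE:
                cache.pop(next(iter(cache)))        # naive FIFO eviction
            cache[n] = np.array(features[n])        # copy row into RAM
        out[i] = cache[n]
    return out

batch = np.random.randint(0, NUM_NODES, size=1024)
print(gather_features(batch).shape)                 # (1024, 128)
```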
1 code implementation • 18 Aug 2022 • Jonghyun Bae, Woohyeon Baek, Tae Jun Ham, Jae W. Lee
The decoding process of L3, a lossless image format designed for DNN training pipelines, is effectively parallelized on the accelerator, minimizing CPU intervention for data preparation during DNN training.
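A toy sketch of the offloading pattern described here: still-encoded data is shipped to the accelerator and decoded by a single batched kernel rather than image by image on the CPU. The delta encoding below is a made-up stand-in for the actual L3 bitstream, and PyTorch is assumed for device execution.

```python
import torch

def encode_delta(images: torch.Tensor) -> torch.Tensor:
    """Store the first pixel row plus row-to-row differences (lossless)."""
    deltas = images.clone()
    deltas[:, 1:] = images[:, 1:] - images[:, :-1]
    return deltas

def decode_delta_gpu(deltas: torch.Tensor) -> torch.Tensor:
    """Invert the delta encoding with a single cumulative-sum kernel.

    cumsum runs data-parallel across the whole batch on the accelerator,
    so the CPU launches one kernel instead of decoding each image itself.
    """
    return torch.cumsum(deltas, dim=1)

device = "cuda" if torch.cuda.is_available() else "cpu"
imgs = torch.randint(0, 256, (64, 224, 224), dtype=torch.int32)
restored = decode_delta_gpu(encode_delta(imgs).to(device))
assert torch.equal(restored.cpu(), imgs)            # lossless round trip
```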
no code implementations • 22 Feb 2020 • Tae Jun Ham, Sung Jun Jung, Seonghak Kim, Young H. Oh, Yeonhong Park, Yoonho Song, Jung-Hun Park, Sanghee Lee, Kyoung Park, Jae W. Lee, Deog-Kyoon Jeong
The attention mechanism is widely adopted in state-of-the-art neural networks for computer vision, natural language processing, and machine translation, and it accounts for a large portion of their total execution time.
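For reference, a minimal NumPy sketch of standard scaled dot-product attention, whose matrix multiplications and softmax dominate the runtime that such accelerators target.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d)) V.

    The two matrix multiplications and the softmax are the hot spots
    that dedicated attention accelerators aim to speed up.
    """
    d = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d)   # (batch, n_q, n_k)
    return softmax(scores) @ V                       # weighted sum of values

Q = np.random.randn(2, 8, 64)
K = np.random.randn(2, 16, 64)
V = np.random.randn(2, 16, 64)
print(attention(Q, K, V).shape)                      # (2, 8, 64)
```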