no code implementations • 27 Jan 2024 • Seungcheol Park, Jaehyeon Choi, Sojin Lee, U Kang
How can we compress language models without sacrificing accuracy?
1 code implementation • 7 Aug 2023 • Seungcheol Park, Hojun Choi, U Kang
As a result, K-prune shows significant accuracy improvements of up to 58.02%p higher F1 score compared to existing retraining-free pruning algorithms under a high compression rate of 80% on the SQuAD benchmark, without any retraining process.
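The excerpt above does not describe K-prune's actual algorithm. Purely as an illustration of what retraining-free pruning at an 80% compression rate means, here is a minimal sketch of generic magnitude-based weight pruning (a simple baseline technique, not K-prune's method); the function name and shapes are hypothetical:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, compression_rate: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights without any retraining.

    compression_rate=0.8 removes roughly 80% of the weights, matching
    the compression rate cited in the excerpt above.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * compression_rate)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Example: prune a random 64x64 weight matrix at 80% compression
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, 0.8)
sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
```

Retraining-free methods such as the one described above differ from this baseline by compensating for the removed weights directly, rather than recovering accuracy through further training.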
no code implementations • 23 Dec 2019 • Seungcheol Park, Huiwen Xu, Taehun Kim, Inhwan Hwang, Kyung-Jun Kim, U Kang
We address the problem of measuring transferability between source and target datasets, where the source and the target have different feature spaces and distributions.