1 code implementation • 6 Dec 2023 • Eojin Jeon, Mingyu Lee, Juhyeong Park, Yeachan Kim, Wing-Lam Mok, SangKeun Lee
To mitigate the detrimental effect of bias on networks, previous works have proposed debiasing methods that down-weight the biased examples identified by an auxiliary model trained with explicit bias labels.
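As a point of reference, the down-weighting recipe can be sketched as follows; the specific weighting rule (scaling each example's loss by one minus the bias model's confidence in the gold label) is a common choice in this line of work and is assumed here for illustration rather than taken from the paper.

```python
# Minimal sketch: down-weight examples that an auxiliary bias-only model already gets right.
import torch
import torch.nn.functional as F

def debiased_loss(main_logits, bias_logits, labels):
    """Cross-entropy where confidently 'biased' examples contribute less."""
    with torch.no_grad():
        # Probability the auxiliary (bias-only) model assigns to the gold label.
        p_bias = F.softmax(bias_logits, dim=-1).gather(1, labels.unsqueeze(1)).squeeze(1)
        weights = 1.0 - p_bias  # easy-for-the-bias-model examples get small weight
    per_example = F.cross_entropy(main_logits, labels, reduction="none")
    return (weights * per_example).mean()

# Toy usage with random logits standing in for the two models' outputs.
main_logits = torch.randn(8, 3)
bias_logits = torch.randn(8, 3)
labels = torch.randint(0, 3, (8,))
print(debiased_loss(main_logits, bias_logits, labels))
```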
no code implementations • 13 Sep 2023 • Yeachan Kim, Bonggun Shin
In this work, we carefully analyze the existing methods in heterogeneous environments.
1 code implementation • 17 Mar 2023 • Jun-Hyung Park, Yeachan Kim, Junho Kim, Joon-Young Choi, SangKeun Lee
In this work, we introduce a novel structure pruning method, termed dynamic structure pruning, to identify optimal pruning granularities for intra-channel pruning.
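For orientation, the sketch below shows a static form of intra-channel (group-wise) pruning with a fixed group size and a magnitude criterion; the dynamic granularity selection proposed in the paper is not reproduced, and the group size and keep ratio are illustrative assumptions.

```python
# Illustrative intra-channel (group-wise) pruning: within each filter, input channels are
# split into fixed-size groups and the lowest-norm groups are zeroed out.
import torch

def prune_intra_channel(conv_weight, group_size=4, keep_ratio=0.5):
    """conv_weight: (out_channels, in_channels, kH, kW). Returns a pruned copy."""
    out_c, in_c, kh, kw = conv_weight.shape
    assert in_c % group_size == 0
    w = conv_weight.clone().reshape(out_c, in_c // group_size, group_size, kh, kw)
    # L2 norm of every (filter, channel-group) block.
    norms = w.pow(2).sum(dim=(2, 3, 4)).sqrt()              # (out_c, n_groups)
    k = max(1, int(norms.numel() * keep_ratio))
    threshold = norms.flatten().topk(k).values.min()
    mask = (norms >= threshold).float()[:, :, None, None, None]
    return (w * mask).reshape(out_c, in_c, kh, kw)

weight = torch.randn(16, 8, 3, 3)
pruned = prune_intra_channel(weight)
print((pruned == 0).float().mean())  # fraction of weights zeroed
```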
no code implementations • 12 Jan 2023 • Yeachan Kim, Seongyeon Kim, Ihyeok Seo, Bonggun Shin
Comprehensive results show that PhaseAT significantly improves the convergence for high-frequency information.
no code implementations • 10 Jun 2022 • Yeachan Kim, Bonggun Shin
The strategy is to estimate the density of the unlabeled samples and select diverse samples mainly from sparse regions.
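A minimal sketch of that selection strategy, assuming a k-nearest-neighbour distance as the density proxy (the paper's actual estimator may differ):

```python
# Density-based sample selection: approximate density by the mean distance to the k
# nearest neighbours, then query points from the sparsest regions.
import numpy as np

def select_sparse(unlabeled, n_select=10, k=5):
    """unlabeled: (N, D) feature matrix. Returns indices of low-density samples."""
    dists = np.linalg.norm(unlabeled[:, None, :] - unlabeled[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                          # ignore self-distance
    knn_dist = np.sort(dists, axis=1)[:, :k].mean(axis=1)    # large => sparse region
    return np.argsort(-knn_dist)[:n_select]

X = np.random.randn(200, 32)
print(select_sparse(X, n_select=5))
```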
no code implementations • LREC 2022 • Do-Myoung Lee, Yeachan Kim, Chang-gyun Seo
In this paper, we propose context-based virtual adversarial training (ConVAT) to prevent a text classifier from overfitting to noisy labels.
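For context, a generic virtual adversarial training step looks roughly like the sketch below; the context-level formulation of ConVAT is not reproduced here, and the hyper-parameters xi and epsilon are illustrative.

```python
# Generic virtual adversarial training (VAT): find a small perturbation that maximally
# changes the model's prediction, then penalise that change.
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, epsilon=1.0):
    with torch.no_grad():
        p = F.softmax(model(x), dim=-1)
    # One power-iteration step to estimate the adversarial direction.
    d = xi * F.normalize(torch.randn_like(x).flatten(1), dim=1).view_as(x)
    d.requires_grad_(True)
    adv_dist = F.kl_div(F.log_softmax(model(x + d), dim=-1), p, reduction="batchmean")
    grad = torch.autograd.grad(adv_dist, d)[0]
    r_adv = epsilon * F.normalize(grad.flatten(1), dim=1).view_as(x)
    return F.kl_div(F.log_softmax(model(x + r_adv), dim=-1), p, reduction="batchmean")

model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 3))
x = torch.randn(4, 16)
print(vat_loss(model, x))
```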
no code implementations • 17 Sep 2021 • Yeachan Kim, Bonggun Shin
In silico prediction of drug-target interactions (DTI) is important for drug discovery because it can substantially reduce timelines and costs in the drug development process.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Kang-Min Kim, Bumsu Hyeon, Yeachan Kim, Jun-Hyung Park, SangKeun Lee
In addition, we propose weakly supervised pretraining, in which labels for text classification are obtained automatically from an existing approach.
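The weak-labelling idea can be illustrated with a toy sketch in which a keyword heuristic stands in for "an existing approach"; the keywords and abstention rule are assumptions made purely for illustration.

```python
# Toy weak supervision: pseudo-labels from keyword matching build a pretraining set
# without manual annotation; a classifier would then be pretrained on these pairs.
KEYWORDS = {"sports": {"match", "team", "goal"}, "tech": {"software", "chip", "code"}}

def weak_label(text):
    tokens = set(text.lower().split())
    scores = {label: len(tokens & words) for label, words in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None   # abstain when no keyword matches

corpus = ["The team scored a late goal", "New chip runs faster code", "Weather was calm"]
pseudo = [(t, weak_label(t)) for t in corpus if weak_label(t) is not None]
print(pseudo)  # automatically labelled pretraining examples
```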
no code implementations • ACL 2020 • Yeachan Kim, Kang-Min Kim, SangKeun Lee
However, unlike prior works that assign codes of the same length to all words, we adaptively assign codes of different lengths to each word while learning downstream tasks.
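One way to picture per-word adaptive code lengths is with learned gates over a set of codebooks, as in the rough sketch below; the gating-plus-penalty design is an assumption for illustration, not the paper's formulation.

```python
# Rough sketch: each word composes its embedding from M codebooks, and a learned gate per
# (word, codebook) lets the downstream loss switch codebooks off, shortening some codes.
import torch
import torch.nn as nn

class AdaptiveCodeEmbedding(nn.Module):
    def __init__(self, vocab, n_books=8, book_size=16, dim=64):
        super().__init__()
        self.register_buffer("codes", torch.randint(0, book_size, (vocab, n_books)))
        self.books = nn.Parameter(torch.randn(n_books, book_size, dim) * 0.1)
        self.gate_logits = nn.Parameter(torch.zeros(vocab, n_books))  # per-word gates

    def forward(self, word_ids):
        codes = self.codes[word_ids]                                   # (B, n_books)
        vecs = self.books[torch.arange(self.books.size(0)), codes]     # (B, n_books, dim)
        gates = torch.sigmoid(self.gate_logits[word_ids]).unsqueeze(-1)
        return (gates * vecs).sum(dim=1)                               # (B, dim)

    def length_penalty(self):
        # Pushes gates toward zero, i.e. fewer active code components per word.
        return torch.sigmoid(self.gate_logits).mean()

emb = AdaptiveCodeEmbedding(vocab=1000)
print(emb(torch.tensor([3, 42, 7])).shape, emb.length_penalty())
```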
no code implementations • LREC 2020 • Yeachan Kim, Kang-Min Kim, SangKeun Lee
In the first stage, we learn subword embeddings from the pre-trained word embeddings by using an additive composition function of subwords.
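A small sketch of that first stage, assuming character n-grams as the subwords and plain gradient descent on the reconstruction error (both illustrative choices):

```python
# Learn subword (character n-gram) vectors whose sum reconstructs each pre-trained
# word embedding (additive composition).
import numpy as np

def char_ngrams(word, n=3):
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

# Toy "pre-trained" word embeddings.
rng = np.random.default_rng(0)
words = ["cat", "cats", "catalog"]
word_vecs = {w: rng.normal(size=16) for w in words}

# One vector per n-gram, fitted by gradient descent on the reconstruction loss.
grams = sorted({g for w in words for g in char_ngrams(w)})
sub = {g: np.zeros(16) for g in grams}
for _ in range(200):
    for w, target in word_vecs.items():
        gs = char_ngrams(w)
        err = sum(sub[g] for g in gs) - target   # additive composition error
        for g in gs:
            sub[g] -= 0.05 * err                 # gradient step per n-gram

recon = sum(sub[g] for g in char_ngrams("cat"))  # reconstruction check
print(np.linalg.norm(recon - word_vecs["cat"]))
```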
no code implementations • COLING 2018 • Yeachan Kim, Kang-Min Kim, Ji-Min Lee, SangKeun Lee
Unlike previous models that learn word representations from a large corpus, we take a set of pre-trained word embeddings and generalize it to word entries, including OOV words.
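As one hedged illustration of the idea, a small character-level encoder can be trained to reproduce the pre-trained vectors of known words and then applied to unseen ones; the GRU encoder below is an assumed architecture, not the paper's model.

```python
# Train a character-level encoder to mimic pre-trained word vectors, then use it to
# produce embeddings for OOV words.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz"
char2id = {c: i for i, c in enumerate(CHARS)}

class CharEncoder(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.char_emb = nn.Embedding(len(CHARS), dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, word):
        ids = torch.tensor([[char2id[c] for c in word]])
        _, h = self.rnn(self.char_emb(ids))
        return h.squeeze(0).squeeze(0)           # (dim,) vector built from characters

# Toy pre-trained embeddings to mimic.
pretrained = {"cat": torch.randn(16), "dog": torch.randn(16)}
enc = CharEncoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-2)
for _ in range(100):
    loss = sum(((enc(w) - v) ** 2).mean() for w, v in pretrained.items())
    opt.zero_grad()
    loss.backward()
    opt.step()

print(enc("cats").shape)   # embedding for an out-of-vocabulary word
```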