no code implementations • NAACL 2022 • Juhyuk Lee, Min-Joong Lee, June Yong Yang, Eunho Yang
To keep a knowledge graph up-to-date, an extractor needs not only the ability to recall the triples it encountered during training, but also the ability to extract new triples from contexts it has never seen before.
no code implementations • 28 Feb 2024 • June Yong Yang, Byeongwook Kim, Jeongin Bae, Beomseok Kwon, Gunho Park, Eunho Yang, Se Jung Kwon, Dongsoo Lee
Key-Value (KV) Caching has become an essential technique for accelerating the inference speed and throughput of generative Large Language Models (LLMs).
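As background, the snippet below is a minimal sketch of the general KV-caching idea in autoregressive attention, not the method proposed in this paper; the function name `attend_with_cache` and the single-head layout are illustrative assumptions.

```python
import torch

def attend_with_cache(q, k_new, v_new, kv_cache=None):
    """One autoregressive decoding step with a Key-Value cache.

    q:        (batch, 1, dim) query for the current token
    k_new:    (batch, 1, dim) key for the current token
    v_new:    (batch, 1, dim) value for the current token
    kv_cache: optional (K, V) tuple holding keys/values of all past tokens
    """
    if kv_cache is not None:
        k_past, v_past = kv_cache
        k = torch.cat([k_past, k_new], dim=1)  # reuse cached keys
        v = torch.cat([v_past, v_new], dim=1)  # reuse cached values
    else:
        k, v = k_new, v_new

    # Attend over every token so far; past keys/values are never recomputed,
    # which is what makes decoding fast (at the cost of cache memory).
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    out = torch.softmax(scores, dim=-1) @ v
    return out, (k, v)  # return the grown cache for the next step
```

The cache grows by one key/value pair per generated token, which is why cache memory, rather than compute, often becomes the bottleneck at long sequence lengths.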
no code implementations • 16 Dec 2021 • Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang
However, they now suffer from a lack of sample diversity, as they always deterministically select the regions of maximum saliency, injecting bias into the augmented data.
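To make the critique concrete, here is a minimal sketch contrasting deterministic argmax region selection with saliency-proportional sampling; the patch-level saliency vector and the `pick_patch` helper are hypothetical illustrations, not the paper's algorithm.

```python
import torch

def pick_patch(saliency, stochastic=True):
    """Pick a patch index from a patch-level saliency map.

    saliency: (num_patches,) tensor of non-negative saliency scores.
    Deterministic argmax always returns the same region for a given image
    (the bias described above); sampling with probability proportional to
    saliency diversifies the augmented data while still favoring salient
    regions.
    """
    if stochastic:
        probs = saliency / saliency.sum()
        return torch.multinomial(probs, num_samples=1).item()
    return saliency.argmax().item()
```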
no code implementations • 2 Dec 2021 • Yeonsung Jung, Hajin Shim, June Yong Yang, Eunho Yang
Deep neural networks (DNNs), despite the impressive ability of over-capacity networks to generalize, often rely heavily on malignant bias as a shortcut rather than on task-related information for discriminative tasks.
no code implementations • 29 Sep 2021 • Juhyuk Lee, Min-Joong Lee, June Yong Yang, Eunho Yang
In this paper, we show that although existing extraction models are able to memorize and recall already-seen triples, they fail to generalize effectively to unseen triples.
1 code implementation • NeurIPS 2020 • Geondo Park, June Yong Yang, Sung Ju Hwang, Eunho Yang
Neural networks embedded in safety-sensitive applications such as self-driving cars and wearable health monitors rely on two important techniques: input attribution for hindsight analysis, and network compression to reduce their size for edge computing.
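For context on the first of these techniques, below is a minimal sketch of plain gradient-based input attribution, a common baseline rather than the specific attribution scheme studied in the paper; `gradient_attribution` is an illustrative name.

```python
import torch

def gradient_attribution(model, x, target_class):
    """Simple gradient-based input attribution (saliency map).

    Returns |d logit_target / d x|, a per-feature estimate of how much
    each input element influenced the prediction, used for hindsight
    analysis of a model's decision.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    logits[:, target_class].sum().backward()
    return x.grad.abs()
```

The tension the paper points at follows directly from this setup: compression alters the network's weights, and since attributions are computed through those weights, a compressed model's explanations can drift from the original's even when its predictions do not.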