Search Results for author: June Yong Yang

Found 6 papers, 1 paper with code

Does it Really Generalize Well on Unseen Data? Systematic Evaluation of Relational Triple Extraction Methods

no code implementations NAACL 2022 Juhyuk Lee, Min-Joong Lee, June Yong Yang, Eunho Yang

To keep a knowledge graph up-to-date, an extractor needs not only the ability to recall the triples it encountered during training, but also the ability to extract new triples from contexts it has never seen before.

Knowledge Graphs · Memorization
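
As a loose illustration of the seen-versus-unseen distinction described in the snippet above (a toy sketch, not the paper's benchmark or model), the following splits an extractor's predicted (subject, relation, object) triples into those recalled from training and those never observed; the data and the extract stub are hypothetical.

```python
# Toy sketch: separate predicted triples into "recalled" (seen during training)
# and "unseen" (never observed). All triples here are made up for illustration.
train_triples = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
}

def extract(context):
    """Stand-in for a relational triple extractor (returns hard-coded triples)."""
    return {
        ("Marie Curie", "born_in", "Warsaw"),          # recalled from training
        ("Pierre Curie", "spouse_of", "Marie Curie"),  # unseen triple
    }

predicted = extract("Marie Curie, born in Warsaw, married Pierre Curie ...")
recalled = predicted & train_triples
unseen = predicted - train_triples
print(f"recalled: {len(recalled)}, unseen: {len(unseen)}")  # recalled: 1, unseen: 1
```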

No Token Left Behind: Reliable KV Cache Compression via Importance-Aware Mixed Precision Quantization

no code implementations 28 Feb 2024 June Yong Yang, Byeongwook Kim, Jeongin Bae, Beomseok Kwon, Gunho Park, Eunho Yang, Se Jung Kwon, Dongsoo Lee

Key-Value (KV) Caching has become an essential technique for accelerating the inference speed and throughput of generative Large Language Models (LLMs).

Quantization
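
The snippet above is only the opening sentence of the abstract; as a rough illustration of the general idea named in the title (not the authors' actual algorithm), the sketch below quantizes cached keys token by token at mixed precision, driven by a made-up per-token importance score.

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization of a vector, dequantized back to float."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax if np.abs(x).max() > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

# Toy cached keys for 6 tokens plus a hypothetical per-token importance score
# (e.g., accumulated attention mass); both are random stand-ins here.
keys = np.random.randn(6, 16)
importance = np.random.rand(6)

# Mixed precision: keep the more important tokens at 8 bits, the rest at 4.
threshold = np.median(importance)
compressed = np.stack([
    quantize(k, bits=8 if s >= threshold else 4)
    for k, s in zip(keys, importance)
])
print(np.abs(compressed - keys).mean())  # reconstruction error of the toy cache
```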

Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing

no code implementations 16 Dec 2021 Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang

However, they suffer from a lack of sample diversification, as they always deterministically select the regions with maximum saliency, injecting bias into the augmented data.
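
For context on the deterministic max-saliency selection that this snippet criticizes, here is a generic saliency-guided, CutMix-style baseline (not the Saliency Grafting method itself); the images, saliency map, and patch size are toy stand-ins.

```python
import numpy as np

def max_saliency_mix(img_a, img_b, sal_b, patch=8):
    """Paste the single most salient patch of img_b onto img_a (deterministic
    baseline behaviour, not Saliency Grafting). All arrays are HxW."""
    h, w = sal_b.shape
    best, best_ij = -np.inf, (0, 0)
    for i in range(h - patch + 1):          # exhaustive search for the
        for j in range(w - patch + 1):      # highest-saliency patch
            s = sal_b[i:i + patch, j:j + patch].sum()
            if s > best:
                best, best_ij = s, (i, j)
    i, j = best_ij
    mixed = img_a.copy()
    mixed[i:i + patch, j:j + patch] = img_b[i:i + patch, j:j + patch]
    lam = 1 - patch * patch / (h * w)       # label-mixing ratio from pasted area
    return mixed, lam

a, b = np.random.rand(32, 32), np.random.rand(32, 32)
mixed, lam = max_saliency_mix(a, b, sal_b=np.random.rand(32, 32))
print(mixed.shape, round(lam, 3))           # (32, 32) 0.938
```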

Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation

no code implementations 2 Dec 2021 Yeonsung Jung, Hajin Shim, June Yong Yang, Eunho Yang

Deep neural networks (DNNs), despite their impressive ability to generalize with over-capacity networks, often rely heavily on malignant bias as shortcuts instead of task-related information for discriminative tasks.

Contrastive Learning · Translation

Stop just recalling memorized relations: Extracting Unseen Relational Triples from the context

no code implementations 29 Sep 2021 Juhyuk Lee, Min-Joong Lee, June Yong Yang, Eunho Yang

In this paper, we show that although existing extraction models are able to memorize and recall already seen triples, they cannot generalize effectively to unseen triples.

Knowledge Graphs · Memorization

Attribution Preservation in Network Compression for Reliable Network Interpretation

1 code implementation NeurIPS 2020 Geondo Park, June Yong Yang, Sung Ju Hwang, Eunho Yang

Neural networks embedded in safety-sensitive applications such as self-driving cars and wearable health monitors rely on two important techniques: input attribution for hindsight analysis and network compression to reduce model size for edge computing.

Edge-computing · Network Interpretation (+1)
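
Since the snippet names input attribution as one of the two techniques, the block below shows a minimal, generic gradient-saliency attribution in PyTorch (a hedged sketch, not the paper's attribution-preserving compression scheme); the model and shapes are toy placeholders.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for either a full or a compressed network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

def input_attribution(model, x, target):
    """Plain gradient saliency: |d(score of target class) / d(input)|."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target]   # logit of the class we want to explain
    score.backward()
    return x.grad.detach().abs()  # per-pixel importance for this prediction

x = torch.randn(1, 1, 28, 28)
attr = input_attribution(model, x, target=3)
print(attr.shape)  # torch.Size([1, 1, 28, 28])
```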
