NAACL 2022 • Juhyuk Lee, Min-Joong Lee, June Yong Yang, Eunho Yang
To keep a knowledge graph up-to-date, an extractor needs not only the ability to recall the triples it encountered during training, but also the ability to extract new triples from contexts it has never seen before.
29 Sep 2021 • Juhyuk Lee, Min-Joong Lee, June Yong Yang, Eunho Yang
In this paper, we show that although existing extraction models are able to memorize and recall already seen triples, they cannot generalize effectively to unseen triples.
1 Jan 2021 • Tae Gyoon Kang, Ho-Gyeong Kim, Min-Joong Lee, Jihyun Lee, Seongmin Ok, Hoshik Lee, Young Sang Choi
Transformers with soft attention have been widely adopted for various sequence-to-sequence tasks.
14 Aug 2020 • Taewoo Lee, Min-Joong Lee, Tae Gyoon Kang, Seokyeoung Jung, Minseok Kwon, Yeona Hong, Jungin Lee, Kyoung-Gu Woo, Ho-Gyeong Kim, Jiseung Jeong, Ji-Hyun Lee, Hosik Lee, Young Sang Choi
We propose an adapter-based multi-domain Transformer language model (LM) for Transformer ASR.
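The abstract does not spell out the adapter architecture, but a common design is a per-domain bottleneck adapter (down-projection, nonlinearity, up-projection, residual) inserted after a Transformer layer's output. The sketch below is a minimal NumPy illustration of that general idea, not the paper's implementation; all class and parameter names (`Adapter`, `MultiDomainAdapters`, `d_bottleneck`) are hypothetical.

```python
import numpy as np


def relu(x):
    return np.maximum(0.0, x)


class Adapter:
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus a
    residual connection so the backbone's representation is preserved."""

    def __init__(self, d_model, d_bottleneck, rng):
        # Small random init stands in for trained weights.
        self.w_down = rng.standard_normal((d_model, d_bottleneck)) * 0.02
        self.w_up = rng.standard_normal((d_bottleneck, d_model)) * 0.02

    def __call__(self, h):
        # h: (seq_len, d_model) hidden states from a Transformer LM layer.
        return h + relu(h @ self.w_down) @ self.w_up


class MultiDomainAdapters:
    """One lightweight adapter per domain; the shared LM backbone stays
    fixed and only the adapter matching the utterance's domain is applied."""

    def __init__(self, d_model, d_bottleneck, domains, seed=0):
        rng = np.random.default_rng(seed)
        self.adapters = {d: Adapter(d_model, d_bottleneck, rng) for d in domains}

    def __call__(self, h, domain):
        return self.adapters[domain](h)
```

Because each adapter is a small bottleneck, adding a new domain costs only a few extra parameters rather than a full per-domain LM.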