1 code implementation • COLING 2022 • Reinald Kim Amplayo, Kang Min Yoo, Sang-Woo Lee
Metadata attributes (e.g., user and product IDs from reviews) can be incorporated as additional inputs to neural-based NLP models by expanding the architecture of the models to improve performance.
1 code implementation • 23 Oct 2023 • Hyuhng Joon Kim, Hyunsoo Cho, Sang-Woo Lee, Junyeob Kim, Choonghyun Park, Sang-goo Lee, Kang Min Yoo, Taeuk Kim
When deploying machine learning systems in the wild, it is highly desirable for them to effectively transfer prior knowledge to unfamiliar domains while also raising alarms on anomalous inputs.
1 code implementation • 27 May 2023 • Deokjae Lee, JunYeong Lee, Jung-Woo Ha, Jin-Hwa Kim, Sang-Woo Lee, Hwaran Lee, Hyun Oh Song
To this end, we propose Bayesian red teaming (BRT), a family of novel query-efficient black-box red-teaming methods based on Bayesian optimization, which iteratively identify diverse positive test cases leading to model failures by exploiting a pre-defined pool of user inputs and past evaluations.
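The core loop described above can be sketched as a generic query-efficient black-box search over a fixed input pool. This is only an illustrative skeleton, not the paper's method: BRT fits a Bayesian-optimization surrogate (e.g., a Gaussian process) over past evaluations, whereas here `surrogate_fit`, `surrogate_predict`, `victim_score`, and the 0.5 failure threshold are all hypothetical stand-ins.

```python
import random

def red_team_search(input_pool, victim_score, surrogate_fit, surrogate_predict,
                    n_queries=50, seed=0):
    """Query-efficient black-box search over a fixed input pool (sketch):
    iteratively fit a surrogate on past (input, score) evaluations and
    query the victim model only on the pool item the surrogate ranks as
    most likely to trigger a failure."""
    rng = random.Random(seed)
    pool = list(input_pool)
    history = []   # past evaluations: (input, score)
    failures = []  # positive test cases found so far
    for _ in range(n_queries):
        if not pool:
            break
        if history:
            model = surrogate_fit(history)
            candidate = max(pool, key=lambda x: surrogate_predict(model, x))
        else:
            candidate = rng.choice(pool)   # cold start: random pick
        pool.remove(candidate)
        score = victim_score(candidate)    # the only expensive black-box call
        history.append((candidate, score))
        if score > 0.5:                    # illustrative failure threshold
            failures.append(candidate)
    return failures
```

The pattern that makes this query-efficient is that the cheap surrogate is evaluated on the whole remaining pool, while the expensive victim model is queried once per iteration.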
1 code implementation • 23 May 2023 • Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung
We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question.
no code implementations • 21 Dec 2022 • Hyunsoo Cho, Hyuhng Joon Kim, Junyeob Kim, Sang-Woo Lee, Sang-goo Lee, Kang Min Yoo, Taeuk Kim
Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning.
no code implementations • 20 Dec 2022 • Sang-Woo Lee, Sungdong Kim, Donghyeon Ko, Donghoon Ham, Youngki Hong, Shin Ah Oh, Hyunhoon Jung, Wangkyo Jung, Kyunghyun Cho, Donghyun Kwak, Hyungsuk Noh, WooMyoung Park
Task-oriented dialogue (TOD) systems are mainly based on the slot-filling-based TOD (SF-TOD) framework, in which dialogues are broken down into smaller, controllable units (i.e., slots) to fulfill a specific task.
no code implementations • 7 Dec 2022 • Kyuyong Shin, Hanock Kwak, Wonjae Kim, Jisu Jeong, Seungjae Jung, Kyung-Min Kim, Jung-Woo Ha, Sang-Woo Lee
Recent studies have proposed unified user modeling frameworks that leverage user behavior data from various applications.
no code implementations • 17 Oct 2022 • Sanghwan Bae, Donghyun Kwak, Soyoung Kang, Min Young Lee, Sungdong Kim, Yuin Jeong, Hyeri Kim, Sang-Woo Lee, WooMyoung Park, Nako Sung
Remembering important information from the past and continuing to talk about it in the present are crucial in long-term conversations.
1 code implementation • COLING 2022 • Xiaodong Gu, Zhaowei Zhang, Sang-Woo Lee, Kang Min Yoo, Jung-Woo Ha
While Transformers have had significant success in paragraph generation, they treat sentences as linear sequences of tokens and often neglect their hierarchical information.
no code implementations • 31 May 2022 • Young-Ho Kim, Sungdong Kim, Minsuk Chang, Sang-Woo Lee
Current natural language interaction for self-tracking tools largely depends on bespoke implementations optimized for a specific tracking theme and data format, which are neither generalizable nor scalable across the vast design space of self-tracking.
no code implementations • 25 May 2022 • Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, Taeuk Kim
Despite the recent explosion of interest in in-context learning, the underlying mechanism and the precise impact of demonstration quality remain elusive.
1 code implementation • 25 May 2022 • Jin-Hwa Kim, Yunji Kim, Jiyoung Lee, Kang Min Yoo, Sang-Woo Lee
Based on the recent trend of multimodal generative evaluation exploiting vision-and-language pre-trained models, we propose the negative Gaussian cross-mutual information computed over CLIP features as a unified metric, coined Mutual Information Divergence (MID).
Ranked #1 on Human Judgment Classification on Pascal-50S
1 code implementation • Findings (ACL) 2022 • Yeon Seonwoo, Juhee Son, Jiho Jin, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh
These models have shown significant increases in inference speed, but at the cost of lower QA performance compared to retriever-reader models.
1 code implementation • NAACL 2022 • Sanghwan Bae, Donghyun Kwak, Sungdong Kim, Donghoon Ham, Soyoung Kang, Sang-Woo Lee, WooMyoung Park
In this work, we study the challenge of imposing roles on open-domain dialogue systems, with the goal of making the systems maintain consistent roles while conversing naturally with humans.
no code implementations • NAACL 2022 • Seongjin Shin, Sang-Woo Lee, Hwijeen Ahn, Sungdong Kim, HyoungSeok Kim, Boseop Kim, Kyunghyun Cho, Gichang Lee, WooMyoung Park, Jung-Woo Ha, Nako Sung
Many recent studies on large-scale language models have reported strong in-context zero- and few-shot learning abilities.
no code implementations • Findings (ACL) 2022 • Kyungjae Lee, Wookje Han, Seung-won Hwang, Hwaran Lee, Joonsuk Park, Sang-Woo Lee
To this end, we first propose a novel task, Continuously-updated QA (CuQA), in which multiple large-scale updates are made to LMs, and the performance is measured with respect to the success in adding and updating knowledge while retaining existing knowledge.
no code implementations • 4 Nov 2021 • Xiaodong Gu, Kang Min Yoo, Sang-Woo Lee
Pre-trained language models (PLMs) have marked a huge leap in neural dialogue modeling.
no code implementations • 16 Sep 2021 • Reinald Kim Amplayo, Kang Min Yoo, Sang-Woo Lee
Metadata attributes (e.g., user and product IDs from reviews) can be incorporated as additional inputs to neural-based NLP models by modifying the architecture of the models in order to improve their performance.
2 code implementations • EMNLP 2021 • Boseop Kim, HyoungSeok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, Dong Hyeon Jeon, Sunghyun Park, Sungju Kim, Seonhoon Kim, Dongpil Seo, Heungsub Lee, Minyoung Jeong, Sungjae Lee, Minsub Kim, Suk Hyun Ko, Seokhun Kim, Taeyong Park, Jinuk Kim, Soyoung Kang, Na-Hyeon Ryu, Kang Min Yoo, Minsuk Chang, Soobin Suh, Sookyo In, Jinseong Park, Kyungduk Kim, Hiun Kim, Jisu Jeong, Yong Goo Yeo, Donghoon Ham, Dongju Park, Min Young Lee, Jaewook Kang, Inho Kang, Jung-Woo Ha, WooMyoung Park, Nako Sung
GPT-3 demonstrates the remarkable in-context learning ability of large-scale language models (LMs) trained on hundreds of billions of tokens of data.
1 code implementation • Findings (ACL) 2021 • Yeon Seonwoo, Sang-Woo Lee, Ji-Hoon Kim, Jung-Woo Ha, Alice Oh
In multi-hop QA, answering complex questions entails iterative document retrieval for finding the missing entity of the question.
1 code implementation • ACL 2021 • Sungdong Kim, Minsuk Chang, Sang-Woo Lee
We propose NeuralWOZ, a novel dialogue collection framework that uses model-based dialogue simulation.
1 code implementation • Findings (EMNLP) 2021 • Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, Woomyeong Park
Large-scale language models such as GPT-3 are excellent few-shot learners, allowing them to be controlled via natural text prompts.
no code implementations • 23 Oct 2020 • Minjeong Kim, Gyuwan Kim, Sang-Woo Lee, Jung-Woo Ha
Language model pre-training has shown promising results in various downstream tasks.
no code implementations • 6 Jul 2020 • Sang-Woo Lee, Hyunhoon Jung, SukHyun Ko, Sunyoung Kim, Hyewon Kim, Kyoungtae Doh, Hyunjung Park, Joseph Yeo, Sang-Houn Ok, Joonhaeng Lee, Sungsoon Lim, Minyoung Jeong, Seongjae Choi, SeungTae Hwang, Eun-Young Park, Gwang-Ja Ma, Seok-Joo Han, Kwang-Seung Cha, Nako Sung, Jung-Woo Ha
Tracking suspected cases of COVID-19 is crucial to suppressing the spread of the pandemic.
1 code implementation • 20 Apr 2020 • Jung-Woo Ha, Kihyun Nam, Jingu Kang, Sang-Woo Lee, Sohee Yang, Hyunhoon Jung, Eunmi Kim, Hyeji Kim, Soojin Kim, Hyun Ah Kim, Kyoungtae Doh, Chan Kyu Lee, Nako Sung, Sunghun Kim
Automatic speech recognition (ASR) via call is essential for various applications, including AI for contact center (AICC) services.
3 code implementations • ACL 2020 • Sungdong Kim, Sohee Yang, Gyuwan Kim, Sang-Woo Lee
This mechanism consists of two steps: (1) predicting a state operation for each of the memory slots, and (2) overwriting the memory with new values, only a few of which are generated according to the predicted state operations.
Ranked #10 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.0
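The two-step mechanism above can be sketched in miniature. This is a minimal illustration, assuming four state operations as in operation-based dialogue state tracking (CARRYOVER, DELETE, DONTCARE, UPDATE); the operation classifier and value generator are hypothetical stand-ins for the neural components, and only UPDATE slots consume a generated value.

```python
def apply_state_operations(prev_state, operations, generated_values):
    """Selectively overwrite the dialogue-state memory: keep, delete, or
    mark slots directly, and spend generated values only on UPDATE slots."""
    new_state = dict(prev_state)
    gen = iter(generated_values)  # values exist only for UPDATE slots
    for slot, op in operations.items():
        if op == "CARRYOVER":
            continue                      # keep the previous value
        elif op == "DELETE":
            new_state[slot] = "none"      # drop the slot value
        elif op == "DONTCARE":
            new_state[slot] = "dontcare"  # user expressed no preference
        elif op == "UPDATE":
            new_state[slot] = next(gen)   # decoder generates a fresh value
    return new_state
```

For example, with a previous state `{"hotel-area": "north", "hotel-price": "cheap"}`, operations `{"hotel-area": "CARRYOVER", "hotel-price": "UPDATE"}`, and one generated value `"expensive"`, only the price slot is rewritten. This is why the approach is efficient: the generator decodes values for a handful of slots instead of the full state at every turn.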
1 code implementation • ICLR 2019 • Sang-Woo Lee, Tong Gao, Sohee Yang, Jaejun Yoo, Jung-Woo Ha
Answerer in Questioner's Mind (AQM) is an information-theoretic framework that has been recently proposed for task-oriented dialog systems.
1 code implementation • NeurIPS 2018 • Sang-Woo Lee, Yu-Jung Heo, Byoung-Tak Zhang
Goal-oriented dialogue tasks occur when a questioner asks an action-oriented question and an answerer responds with the intent of letting the questioner know a correct action to take.
1 code implementation • NeurIPS 2017 • Sang-Woo Lee, Jin-Hwa Kim, Jaehyun Jun, Jung-Woo Ha, Byoung-Tak Zhang
Catastrophic forgetting is a problem in which neural networks lose information about the first task after being trained on a second task.
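The paper tackles this with incremental moment matching (IMM), which merges the posterior distributions of networks trained on consecutive tasks. A minimal sketch of the simplest variant, mean-IMM, which merges the parameters of the two task-specific networks by weighted averaging (the flat parameter lists and the mixing ratio here are illustrative):

```python
def mean_imm(weights_task1, weights_task2, alpha=0.5):
    """Mean-IMM sketch: merge two task-specific parameter vectors by
    weighted averaging, theta = (1 - alpha) * theta1 + alpha * theta2."""
    assert len(weights_task1) == len(weights_task2)
    return [(1 - alpha) * w1 + alpha * w2
            for w1, w2 in zip(weights_task1, weights_task2)]
```

The averaged network approximates the mode of the mixture of the two task posteriors; the paper also proposes a mode-IMM variant that additionally weights each parameter by its (Fisher-information-based) precision rather than averaging uniformly.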
no code implementations • 11 Mar 2017 • Sungtae Lee, Sang-Woo Lee, Jinyoung Choi, Dong-Hyun Kwak, Byoung-Tak Zhang
To solve this issue, subgoal and option frameworks have been proposed.
1 code implementation • NeurIPS 2016 • Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang
We present Multimodal Residual Networks (MRN) for the multimodal residual learning of visual question-answering, which extends the idea of deep residual learning.
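The extension above can be sketched as a single multimodal residual block: the joint residual function combines the question and visual features via an element-wise product of nonlinear mappings, and is added back to the question shortcut, H(q, v) = q + F(q, v). This sketch is illustrative only: it drops the learned projection matrices (using identity projections instead) and the block stacking of the actual model.

```python
import math

def _tanh_vec(xs):
    """Element-wise tanh nonlinearity over a plain feature list."""
    return [math.tanh(x) for x in xs]

def mrn_block(question_feat, visual_feat):
    """One multimodal residual block (sketch): the joint residual
    F(q, v) is the element-wise product of nonlinearly mapped question
    and visual features, added to the question shortcut: H = q + F(q, v)."""
    residual = [qi * vi for qi, vi in zip(_tanh_vec(question_feat),
                                          _tanh_vec(visual_feat))]
    return [q + r for q, r in zip(question_feat, residual)]
```

The shortcut carries the question representation through unchanged, so the visual signal enters only through the multiplicative residual, mirroring how residual learning lets each block learn a correction on top of an identity mapping.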
no code implementations • 15 Jun 2015 • Sang-Woo Lee, Min-Oh Heo, Jiwon Kim, Jeonghee Kim, Byoung-Tak Zhang
The proposed architecture consists of deep representation learners and fast learnable shallow kernel networks, both of which synergize to track the information of new data.