no code implementations • CVPR 2023 • Siwon Kim, Jinoh Oh, Sungjin Lee, Seunghak Yu, Jaeyoung Do, Tara Taghavi
In this paper, we propose counterfactual explanation with text-driven concepts (CounTEX), where the concepts are defined only from text by leveraging a pre-trained multi-modal joint embedding space without additional concept-annotated datasets.
no code implementations • ICCV 2023 • Jungbeom Lee, Sungjin Lee, Jinseok Nam, Seunghak Yu, Jaeyoung Do, Tara Taghavi
Referring image segmentation (RIS) aims to localize the object in an image referred to by a natural language expression.
no code implementations • RANLP 2021 • Seunghak Yu, Giovanni Da San Martino, Mitra Mohtarami, James Glass, Preslav Nakov
Online users today are exposed to misleading and propagandistic news articles and media posts on a daily basis.
1 code implementation • NAACL 2022 • Hongyin Luo, Shang-Wen Li, Mingye Gao, Seunghak Yu, James Glass
Pretrained language models have significantly improved the performance of downstream language understanding tasks, including extractive question answering, by providing high-quality contextualized word embeddings.
Ranked #1 on Question Answering on MRQA out-of-domain
no code implementations • 20 Aug 2020 • Seunghak Yu, Tianxing He, James Glass
Knowledge graphs (KGs) have the advantage of providing fine-grained detail for question-answering systems.
no code implementations • 15 Jul 2020 • Giovanni Da San Martino, Stefano Cresci, Alberto Barrón-Cedeño, Seunghak Yu, Roberto Di Pietro, Preslav Nakov
Propaganda campaigns aim at influencing people's mindset with the purpose of advancing a specific agenda.
no code implementations • ACL 2020 • Giovanni Da San Martino, Shaden Shaar, Yifan Zhang, Seunghak Yu, Alberto Barrón-Cedeño, Preslav Nakov
However, little attention has been paid to the specific rhetorical and psychological techniques used to convey propaganda messages.
no code implementations • 15 Nov 2019 • Seunghak Yu, Giovanni Da San Martino, Preslav Nakov
Many recent political events, such as the 2016 US Presidential elections or the 2018 Brazilian elections, have drawn the attention of institutions and the general public to the role of the Internet and social media in influencing the outcome of these events.
no code implementations • IJCNLP 2019 • Giovanni Da San Martino, Seunghak Yu, Alberto Barrón-Cedeño, Rostislav Petrov, Preslav Nakov
Propaganda aims at influencing people's mindset with the purpose of advancing a specific agenda.
no code implementations • 6 Oct 2019 • Giovanni Da San Martino, Seunghak Yu, Alberto Barrón-Cedeño, Rostislav Petrov, Preslav Nakov
Propaganda aims at influencing people's mindset with the purpose of advancing a specific agenda.
no code implementations • 27 Aug 2019 • Heriberto Cuayáhuitl, Donghyeon Lee, Seonghan Ryu, Yongjin Cho, Sungja Choi, Satish Indurthi, Seunghak Yu, Hyungtak Choi, Inchul Hwang, Jihie Kim
Experimental results using chitchat data reveal that (1) near human-like dialogue policies can be induced, (2) generalisation to unseen data is a difficult problem, and (3) training an ensemble of chatbot agents is essential for improved performance over using a single agent.
1 code implementation • CVPR 2019 • Idan Schwartz, Seunghak Yu, Tamir Hazan, Alexander Schwing
We address this issue and develop a general attention mechanism for visual dialog which operates on any number of data utilities.
Ranked #1 on Visual Dialog on VisDial v0.9 val
no code implementations • EMNLP 2018 • Iryna Haponchyk, Antonio Uva, Seunghak Yu, Olga Uryupina, Alessandro Moschitti
Modern automated dialog systems require complex dialog managers able to deal with user intent triggered by high-level semantic questions.
no code implementations • EMNLP 2018 • Seohyun Back, Seunghak Yu, Sathish Reddy Indurthi, Jihie Kim, Jaegul Choo
Machine reading comprehension helps machines learn to utilize most of the human knowledge written in the form of text.
Ranked #27 on Question Answering on TriviaQA (using extra training data)
no code implementations • EMNLP 2018 • Sathish Reddy Indurthi, Seunghak Yu, Seohyun Back, Heriberto Cuayáhuitl
In recent years many deep neural networks have been proposed to solve Reading Comprehension (RC) tasks.
Ranked #4 on Question Answering on NarrativeQA
1 code implementation • COLING 2018 • Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim
Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarizing and machine translation.
no code implementations • WS 2018 • Seunghak Yu, Sathish Reddy Indurthi, Seohyun Back, Haejun Lee
Reading Comprehension (RC) of text is one of the fundamental tasks in natural language processing.
Ranked #69 on Question Answering on SQuAD1.1
no code implementations • WS 2017 • Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim
Language models for agglutinative languages have long been hindered by the myriad word forms that various affixes can produce from any given word.
1 code implementation • 6 Jul 2017 • Seunghak Yu, Nilesh Kulkarni, Haejun Lee, Jihie Kim
Recent developments in deep learning with application to language modeling have led to success in tasks of text processing, summarizing and machine translation.
1 code implementation • 26 Nov 2016 • Heriberto Cuayáhuitl, Seunghak Yu, Ashley Williamson, Jacob Carse
Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems.