no code implementations • SIGDIAL (ACL) 2022 • Alessandro Suglia, Bhathiya Hemanthage, Malvina Nikandrou, George Pantazopoulos, Amit Parekh, Arash Eshghi, Claudio Greco, Ioannis Konstas, Oliver Lemon, Verena Rieser
We demonstrate EMMA, an embodied multimodal agent which has been developed for the Alexa Prize SimBot challenge.
1 code implementation • ACL (splurobonlp) 2021 • Miltiadis Marios Katsakioris, Ioannis Konstas, Pierre Yves Mignotte, Helen Hastie
Robust situated dialog requires the ability to process instructions based on spatial information, which may or may not be available.
no code implementations • 5 Dec 2023 • Alessandro Suglia, Ioannis Konstas, Oliver Lemon
Our analysis of the literature provides evidence that future work should focus on interactive games in which communication in natural language is important for resolving ambiguities about object referents and action plans, and in which physical embodiment is essential for understanding the semantics of situations and events.
no code implementations • 7 Nov 2023 • Georgios Pantazopoulos, Malvina Nikandrou, Amit Parekh, Bhathiya Hemanthage, Arash Eshghi, Ioannis Konstas, Verena Rieser, Oliver Lemon, Alessandro Suglia
Interactive and embodied tasks pose at least two fundamental challenges to existing Vision & Language (VL) models: 1) grounding language in trajectories of actions and observations, and 2) referential disambiguation.
1 code implementation • 31 Jul 2023 • Vevake Balaraman, Arash Eshghi, Ioannis Konstas, Ioannis Papaioannou
We demonstrate the usefulness of the data by training and evaluating strong baseline models for executing TPRs.
1 code implementation • 31 May 2023 • Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Shay Cohen, Fazl Barez
Conventional methods require examination of examples with strong neuron activation and manual identification of patterns to decipher the concepts a neuron responds to.
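The conventional workflow described above can be sketched as ranking dataset examples by how strongly they activate a chosen neuron. This is a toy illustration only: the linear "neuron" and random data stand in for hooked activations inside a trained transformer, and all names and values are made up.

```python
import numpy as np

# Toy stand-in for a trained model: each row of W is one "neuron".
# Real interpretability work would hook activations inside a transformer.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))          # 8 neurons over 5-dim inputs
neuron_idx = 3                        # the neuron we want to interpret

dataset = rng.normal(size=(100, 5))  # 100 illustrative example inputs

def activation(x, idx):
    """ReLU activation of a single neuron on input x."""
    return max(0.0, float(W[idx] @ x))

# Rank examples by activation strength; a human would then inspect the
# top examples by hand to guess what concept the neuron responds to.
acts = [activation(x, neuron_idx) for x in dataset]
top_k = np.argsort(acts)[::-1][:5]   # indices of the 5 strongest activators
```

The manual step, inspecting `top_k` examples and spotting a shared pattern, is exactly what the paper aims to automate.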
1 code implementation • 27 May 2023 • Jason Hoelscher-Obermaier, Julia Persson, Esben Kran, Ioannis Konstas, Fazl Barez
We use this improved benchmark to evaluate recent model editing techniques and find that they suffer from low specificity.
no code implementations • 25 May 2023 • Sabrina Chiesurin, Dimitris Dimakopoulos, Marco Antonio Sobrevilla Cabezudo, Arash Eshghi, Ioannis Papaioannou, Verena Rieser, Ioannis Konstas
Large language models are known to produce output which sounds fluent and convincing, but is also often wrong, e.g. "unfaithful" with respect to a rationale retrieved from a knowledge base.
Conversational Question Answering • Open-Domain Question Answering
1 code implementation • 24 May 2023 • Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, Shay B. Cohen
Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming.
no code implementations • 10 May 2023 • Nikolas Vitsakis, Amit Parekh, Tanvi Dinkar, Gavin Abercrombie, Ioannis Konstas, Verena Rieser
There are two competing approaches for modelling annotator disagreement: distributional soft-labelling approaches (which aim to capture the level of disagreement) and approaches that model the perspectives of individual annotators or groups thereof.
1 code implementation • 8 Nov 2022 • Alessandro Suglia, José Lopes, Emanuele Bastianelli, Andrea Vanzo, Shubham Agarwal, Malvina Nikandrou, Lu Yu, Ioannis Konstas, Verena Rieser
As the course of a game is unpredictable, so are commentaries, which makes them a unique resource to investigate dynamic language grounding.
1 code implementation • 13 Oct 2022 • Zdeněk Kasner, Ioannis Konstas, Ondřej Dušek
Pretrained language models (PLMs) for data-to-text (D2T) generation can use human-readable data labels such as column headings, keys, or relation names to generalize to out-of-domain examples.
no code implementations • 30 Sep 2022 • Malvina Nikandrou, Lu Yu, Alessandro Suglia, Ioannis Konstas, Verena Rieser
We first propose three plausible task formulations and demonstrate their impact on the performance of continual learning algorithms.
1 code implementation • EMNLP (NLLP) 2021 • Ruben Kruiper, Ioannis Konstas, Alasdair Gray, Farhad Sadeghineko, Richard Watson, Bimal Kumar
Automated Compliance Checking (ACC) systems aim to semantically parse building regulations to a set of rules.
1 code implementation • Findings (EMNLP) 2021 • Xinnuo Xu, Ondřej Dušek, Shashi Narayan, Verena Rieser, Ioannis Konstas
We show via data analysis that it is not only the models that are to blame: more than 27% of facts mentioned in the gold summaries of MiRANews are better grounded in assisting documents than in the main source articles.
1 code implementation • ACL 2021 • Xinnuo Xu, Ondřej Dušek, Verena Rieser, Ioannis Konstas
We present AGGGEN (pronounced 'again'), a data-to-text model which re-introduces two explicit sentence planning stages into neural data-to-text systems: input ordering and input aggregation.
1 code implementation • ACL 2021 • Karin Sevegnani, David M. Howcroft, Ioannis Konstas, Verena Rieser
Mixed initiative in open-domain dialogue requires a system to pro-actively introduce new topics.
no code implementations • EACL 2021 • Alessandro Suglia, Yonatan Bisk, Ioannis Konstas, Antonio Vergari, Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon
Guessing games are a prototypical instance of the "learning by interacting" paradigm.
no code implementations • COLING 2020 • Alessandro Suglia, Antonio Vergari, Ioannis Konstas, Yonatan Bisk, Emanuele Bastianelli, Andrea Vanzo, Oliver Lemon
However, as shown by Suglia et al. (2020), existing models fail to learn truly multi-modal representations, relying instead on gold category labels for objects in the scene both at training and inference time.
no code implementations • ACL 2020 • Xinnuo Xu, Ondřej Dušek, Jingyi Li, Verena Rieser, Ioannis Konstas
Abstractive summarisation is notoriously hard to evaluate since standard word-overlap-based metrics are insufficient.
no code implementations • WS 2020 • Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioannis Konstas, Andrew Finch, Graham Neubig, Xian Li, Alexandra Birch
We describe the findings of the Fourth Workshop on Neural Generation and Translation, held in concert with the annual conference of the Association for Computational Linguistics (ACL 2020).
no code implementations • ACL 2020 • Alessandro Suglia, Ioannis Konstas, Andrea Vanzo, Emanuele Bastianelli, Desmond Elliott, Stella Frank, Oliver Lemon
To remedy this, we present GROLLA, an evaluation framework for Grounded Language Learning with Attributes with three sub-tasks: 1) Goal-oriented evaluation; 2) Object attribute prediction evaluation; and 3) Zero-shot evaluation.
1 code implementation • LREC 2020 • Ruben Kruiper, Julian F. V. Vincent, Jessica Chen-Burger, Marc P. Y. Desmulliez, Ioannis Konstas
Nature has inspired various ground-breaking technological developments in applications ranging from robotics to aerospace engineering and the manufacturing of medical devices.
1 code implementation • ACL 2020 • Ruben Kruiper, Julian F. V. Vincent, Jessica Chen-Burger, Marc P. Y. Desmulliez, Ioannis Konstas
First, we present the Focused Open Biological Information Extraction (FOBIE) dataset and use FOBIE to train a state-of-the-art narrow scientific IE system to extract trade-off relations and arguments that are central to biology texts.
2 code implementations • ACL 2020 • Shubham Agarwal, Trung Bui, Joon-Young Lee, Ioannis Konstas, Verena Rieser
Visual Dialog involves "understanding" the dialog history (what has been discussed previously) and the current question (what is asked), in addition to grounding information in the image, to generate the correct response.
no code implementations • WS 2019 • Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh
This document describes the findings of the Third Workshop on Neural Generation and Translation, held in concert with the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP 2019).
1 code implementation • WS 2019 • Ondřej Dušek, Karin Sevegnani, Ioannis Konstas, Verena Rieser
We present a recurrent neural network based system for automatic quality estimation of natural language generation (NLG) outputs, which jointly learns to assign numerical ratings to individual outputs and to provide pairwise rankings of two different outputs.
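The joint objective described above, numerical ratings plus pairwise rankings, can be sketched with two losses sharing one scorer. This is a hypothetical illustration, with a linear scorer standing in for the recurrent network and all names invented, not the authors' implementation.

```python
import numpy as np

# Illustrative linear scorer in place of the RNN encoder.
rng = np.random.default_rng(1)
w = rng.normal(size=4)

def score(features):
    """Predicted quality score for one NLG output's feature vector."""
    return float(w @ features)

def rating_loss(features, gold_rating):
    """Squared error against a numerical human rating."""
    return (score(features) - gold_rating) ** 2

def ranking_loss(feat_better, feat_worse, margin=1.0):
    """Hinge loss pushing the better output to outscore the worse one."""
    return max(0.0, margin - (score(feat_better) - score(feat_worse)))

def joint_loss(features, gold, feat_better, feat_worse, alpha=0.5):
    """Weighted combination of the rating and ranking objectives."""
    return alpha * rating_loss(features, gold) \
        + (1 - alpha) * ranking_loss(feat_better, feat_worse)
```

Training on both signals lets sparse absolute ratings and cheaper relative judgements supervise the same scorer.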
no code implementations • 14 Sep 2019 • Angus Addlesee, Arash Eshghi, Ioannis Konstas
Dialogue technologies such as Amazon's Alexa have the potential to transform the healthcare industry.
1 code implementation • NAACL 2019 • Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, Alexandros Potamianos
The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.
no code implementations • WS 2019 • Miltiadis Marios Katsakioris, Helen Hastie, Ioannis Konstas, Atanas Laskov
As autonomous systems become more commonplace, we need a way to easily and naturally communicate to them our goals and collaboratively come up with a plan on how to achieve these goals.
1 code implementation • 7 Apr 2019 • Christos Baziotis, Ion Androutsopoulos, Ioannis Konstas, Alexandros Potamianos
The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.
1 code implementation • WS 2018 • Shubham Agarwal, Ondřej Dušek, Ioannis Konstas, Verena Rieser
In this work, we investigate the task of textual response generation in a multimodal task-oriented dialogue system.
1 code implementation • WS 2018 • Shubham Agarwal, Ondřej Dušek, Ioannis Konstas, Verena Rieser
Multimodal search-based dialogue is a challenging new task: It extends visually grounded question answering systems into multi-turn conversations with access to an external database.
1 code implementation • EMNLP 2018 • Xinnuo Xu, Ondřej Dušek, Ioannis Konstas, Verena Rieser
We present three enhancements to existing encoder-decoder models for open-domain conversational agents, aimed at effectively modeling coherence and promoting output diversity: (1) We introduce a measure of coherence as the GloVe embedding similarity between the dialogue context and the generated response, (2) we filter our training corpora based on the measure of coherence to obtain topically coherent and lexically diverse context-response pairs, (3) we then train a response generator using a conditional variational autoencoder model that incorporates the measure of coherence as a latent variable and uses a context gate to guarantee topical consistency with the context and promote lexical diversity.
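The coherence measure in step (1) can be sketched as cosine similarity between averaged word embeddings of context and response. The vectors below are toy values for illustration; a real implementation would load pretrained GloVe embeddings.

```python
import numpy as np

# Toy GloVe-style embeddings (made-up 3-dim vectors; real GloVe vectors
# are 50-300 dimensions loaded from a pretrained file).
EMB = {
    "coffee": np.array([0.9, 0.1, 0.0]),
    "tea":    np.array([0.8, 0.2, 0.1]),
    "drink":  np.array([0.7, 0.3, 0.2]),
    "rocket": np.array([0.0, 0.1, 0.9]),
}

def sentence_vector(tokens):
    """Average the embeddings of known tokens; OOV tokens are skipped."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def coherence(context_tokens, response_tokens):
    """Cosine similarity between context and response vectors."""
    c = sentence_vector(context_tokens)
    r = sentence_vector(response_tokens)
    denom = np.linalg.norm(c) * np.linalg.norm(r)
    return float(c @ r / denom) if denom else 0.0

# An on-topic response scores higher than an off-topic one.
assert coherence(["coffee", "tea"], ["drink"]) > coherence(["coffee", "tea"], ["rocket"])
```

Such a scalar can both filter context-response pairs (step 2) and serve as a conditioning variable during generation (step 3).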
2 code implementations • 18 Sep 2018 • Xinnuo Xu, Ondřej Dušek, Ioannis Konstas, Verena Rieser
We present three enhancements to existing encoder-decoder models for open-domain conversational agents, aimed at effectively modeling coherence and promoting output diversity: (1) We introduce a measure of coherence as the GloVe embedding similarity between the dialogue context and the generated response, (2) we filter our training corpora based on the measure of coherence to obtain topically coherent and lexically diverse context-response pairs, (3) we then train a response generator using a conditional variational autoencoder model that incorporates the measure of coherence as a latent variable and uses a context gate to guarantee topical consistency with the context and promote lexical diversity.
1 code implementation • EMNLP 2018 • Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Luke Zettlemoyer
To study this phenomenon, we introduce the task of generating class member functions given English documentation and the programmatic context provided by the rest of the class.
no code implementations • ACL 2017 • Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, Luke Zettlemoyer
We present an approach to rapidly and easily build natural language interfaces to databases for new domains, whose performance improves over time based on user feedback, and requires minimal intervention.
Ranked #1 on SQL Parsing on Restaurants
5 code implementations • ACL 2017 • Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, Luke Zettlemoyer
Sequence-to-sequence models have shown strong performance across a broad range of applications.
Ranked #6 on AMR Parsing on LDC2015E86
no code implementations • WS 2017 • Roy Schwartz, Maarten Sap, Ioannis Konstas, Leila Zilles, Yejin Choi, Noah A. Smith
This paper describes University of Washington NLP's submission for the Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem 2017) shared task: the Story Cloze Task.
1 code implementation • CONLL 2017 • Roy Schwartz, Maarten Sap, Ioannis Konstas, Li Zilles, Yejin Choi, Noah A. Smith
A writer's style depends not just on personal traits but also on her intent and mental state.
no code implementations • EMNLP 2016 • Rik Koncel-Kedziorski, Ioannis Konstas, Luke Zettlemoyer, Hannaneh Hajishirzi
Texts present coherent stories that have a particular theme or overall setting, for example science fiction or western.