1 code implementation • EMNLP 2021 • Lucia Donatelli, Theresa Schmidt, Debanjali Biswas, Arne Köhn, Fangzhou Zhai, Alexander Koller
Recipe texts are an idiosyncratic form of instructional language that poses unique challenges for automatic understanding.
no code implementations • COLING 2022 • Fangzhou Zhai, Vera Demberg, Alexander Koller
Script knowledge is useful to a variety of NLP tasks.
no code implementations • COLING (CRAC) 2020 • Tatiana Anikina, Alexander Koller, Michael Roth
This work addresses coreference resolution in Abstract Meaning Representation (AMR) graphs, a popular formalism for semantic parsing.
no code implementations • SIGDIAL (ACL) 2020 • Arne Köhn, Julia Wichlacz, Christine Schäfer, Álvaro Torralba, Jörg Hoffmann, Alexander Koller
We present a comprehensive platform to run human-computer experiments where an agent instructs a human in Minecraft, a 3D blocksworld environment.
1 code implementation • *SEM (NAACL) 2022 • Pia Weißenhorn, Lucia Donatelli, Alexander Koller
We show how the AM parser, a compositional semantic parser (Groschwitz et al., 2018), can solve compositional generalization on the COGS dataset.
no code implementations • 18 Jan 2024 • Yuekun Yao, Alexander Koller
Compositional generalization, the ability to predict complex meanings from training on simpler sentences, poses challenges for powerful pretrained seq2seq models.
1 code implementation • 16 Nov 2023 • Katharina Stein, Daniel Fišer, Jörg Hoffmann, Alexander Koller
LLMs are being increasingly used for planning-style tasks, but their capabilities for planning and reasoning are poorly understood.
no code implementations • 15 Nov 2023 • Yuekun Yao, Alexander Koller
We present a novel model that establishes upper and lower bounds on the accuracy, without requiring gold labels for the unseen data.
no code implementations • 8 Nov 2023 • Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, Tushar Khot
Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment.
1 code implementation • 23 Oct 2023 • Bingzhi Li, Lucia Donatelli, Alexander Koller, Tal Linzen, Yuekun Yao, Najoung Kim
The goal of compositional generalization benchmarks is to evaluate how well models generalize to new complex linguistic expressions.
1 code implementation • 2 Oct 2023 • Matthew Finlayson, John Hewitt, Alexander Koller, Swabha Swayamdipta, Ashish Sabharwal
We provide a theoretical explanation for the effectiveness of truncation sampling by proving that truncation methods that discard tokens below some probability threshold (the most common type of truncation) can guarantee that all sampled tokens have nonzero true probability.
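The threshold-based truncation described above can be sketched in a few lines: tokens whose probability falls below a cutoff are discarded and the remaining mass is renormalized before sampling. This is a minimal illustrative sketch, not the paper's implementation; the function names and the `epsilon` cutoff value are assumptions for the example.

```python
import random

def truncate_distribution(probs, epsilon=0.05):
    """Zero out tokens below the probability threshold `epsilon`
    and renormalize the surviving probability mass."""
    kept = [p if p >= epsilon else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

def sample_token(probs, epsilon=0.05, rng=None):
    """Sample a token index from the truncated, renormalized distribution."""
    rng = rng or random.Random(0)
    truncated = truncate_distribution(probs, epsilon)
    return rng.choices(range(len(truncated)), weights=truncated, k=1)[0]
```

With `epsilon=0.05`, a distribution like `[0.5, 0.3, 0.15, 0.04, 0.01]` keeps only the first three tokens, so every sampled token had non-negligible probability under the original model.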
no code implementations • 1 Oct 2023 • Matthias Lindemann, Alexander Koller, Ivan Titov
Strong inductive biases enable learning from little data and help generalization outside of the training distribution.
1 code implementation • 26 May 2023 • Matthias Lindemann, Alexander Koller, Ivan Titov
Our model outperforms pretrained seq2seq models and prior work on realistic semantic parsing tasks that require generalization to longer examples.
no code implementations • 15 May 2023 • Simone Tedeschi, Johan Bos, Thierry Declerck, Jan Hajic, Daniel Hershcovich, Eduard H. Hovy, Alexander Koller, Simon Krek, Steven Schockaert, Rico Sennrich, Ekaterina Shutova, Roberto Navigli
In the last five years, there has been a significant focus in Natural Language Processing (NLP) on developing larger Pretrained Language Models (PLMs) and introducing benchmarks such as SuperGLUE and SQuAD to measure their abilities in language understanding, reasoning, and reading comprehension.
1 code implementation • 27 Apr 2023 • Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, Yejin Choi
We find that the task remains extremely challenging, including for GPT-4, whose generated disambiguations are considered correct only 32% of the time in human evaluation, compared to 90% for disambiguations in our dataset.
1 code implementation • 24 Oct 2022 • Yuekun Yao, Alexander Koller
Sequence-to-sequence (seq2seq) models have been successful across many NLP tasks, including ones that require predicting linguistic structure.
1 code implementation • 6 Oct 2022 • Matthias Lindemann, Alexander Koller, Ivan Titov
Seq2seq models have been shown to struggle with compositional generalisation, i.e., generalising to new and potentially more complex structures than those seen during training.
no code implementations • 24 Feb 2022 • Pia Weißenhorn, Yuekun Yao, Lucia Donatelli, Alexander Koller
A rapidly growing body of research on compositional generalization investigates the ability of a semantic parser to dynamically recombine linguistic elements seen in training into unseen sequences.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Fangzhou Zhai, Iza Škrjanec, Alexander Koller
A crucial step for the exploitation of script knowledge is script parsing, the task of tagging a text with the events and participants from a certain activity.
1 code implementation • ACL (spnlp) 2021 • Jonas Groschwitz, Meaghan Fowlie, Alexander Koller
AM dependency parsing is a method for neural semantic graph parsing that exploits the principle of compositionality.
no code implementations • COLING 2020 • Fangzhou Zhai, Vera Demberg, Alexander Koller
Automatically generated stories need to be not only coherent, but also interesting.
no code implementations • COLING 2020 • Arne Köhn, Julia Wichlacz, Álvaro Torralba, Daniel Höller, Jörg Hoffmann, Alexander Koller
When generating technical instructions, it is often convenient to describe complex objects in the world at different levels of abstraction.
1 code implementation • EMNLP 2020 • Matthias Lindemann, Jonas Groschwitz, Alexander Koller
AM dependency parsing is a linguistically principled method for neural semantic parsing with high accuracy across multiple graphbanks.
1 code implementation • COLING 2020 • Lucia Donatelli, Jonas Groschwitz, Alexander Koller, Matthias Lindemann, Pia Weißenhorn
The emergence of a variety of graph-based meaning representations (MRs) has sparked an important conversation about how to adequately represent semantic structure.
1 code implementation • ACL 2019 • Matthias Lindemann, Jonas Groschwitz, Alexander Koller
Most semantic parsers that map sentences to graph-based meaning representations are hand-designed for specific graphbanks.
no code implementations • ACL 2019 • Antoine Venant, Alexander Koller
We investigate the capacity of mechanisms for compositional semantic parsing to describe relations between sentences and semantic representations.
no code implementations • ACL 2018 • Stefan Grünewald, Sophie Henning, Alexander Koller
Chart constraints, which specify at which string positions a constituent may begin or end, have been shown to speed up chart parsers for PCFGs.
no code implementations • WS 2018 • Nikos Engonopoulos, Christoph Teichmann, Alexander Koller
We present a model which predicts how individual users of a dialog system understand and produce utterances based on user groups.
no code implementations • ACL 2018 • Jonas Groschwitz, Matthias Lindemann, Meaghan Fowlie, Mark Johnson, Alexander Koller
We present a semantic parser for Abstract Meaning Representations which learns to parse strings into tree representations of the compositional structure of an AMR graph.