no code implementations • 25 Feb 2024 • Joris Baan, Raquel Fernández, Barbara Plank, Wilker Aziz
With the rise of increasingly powerful and user-facing NLP systems, there is growing interest in assessing whether they have a good representation of uncertainty by evaluating the quality of their predictive distribution over outcomes.
no code implementations • 28 Jul 2023 • Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz
Recent advances in powerful Language Models have allowed Natural Language Generation (NLG) to emerge as an important technology that can not only perform traditional tasks like summarisation or translation, but also serve as a natural language interface to a variety of applications.
1 code implementation • 19 May 2023 • Mario Giulianelli, Joris Baan, Wilker Aziz, Raquel Fernández, Barbara Plank
In Natural Language Generation (NLG) tasks, for any input, multiple communicative goals are plausible, and any goal can be put into words, or produced, in multiple ways.
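One way to make the idea of multiple plausible productions concrete is to quantify how much a set of alternative outputs for the same input varies. The sketch below uses average pairwise Jaccard distance over word sets as an illustrative variability measure; the function name and the choice of metric are assumptions for illustration, not the paper's exact formulation.

```python
from itertools import combinations


def production_variability(outputs):
    """Average pairwise lexical distance among alternative productions
    for the same input. Illustrative measure: 1 - Jaccard similarity
    over word sets, averaged over all output pairs."""
    def dist(a, b):
        ta, tb = set(a.split()), set(b.split())
        return 1.0 - len(ta & tb) / len(ta | tb)

    pairs = list(combinations(outputs, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)
```

Identical productions yield a variability of 0, while fully disjoint productions yield 1; a generator can then be compared against the spread observed in human productions.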
1 code implementation • 28 Oct 2022 • Joris Baan, Wilker Aziz, Barbara Plank, Raquel Fernández
Calibration is a popular framework to evaluate whether a classifier knows when it does not know, i.e., whether its predictive probabilities are a good indication of how likely a prediction is to be correct.
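The standard way to operationalise this notion is expected calibration error (ECE): bin predictions by confidence and compare average confidence to empirical accuracy in each bin. The sketch below is a minimal illustration of that general recipe, with an assumed equal-width binning scheme, not the specific estimator studied in the paper.

```python
import numpy as np


def expected_calibration_error(confidences, correct, n_bins=10):
    """Sketch of ECE: weighted average, over equal-width confidence
    bins, of the gap between mean confidence and empirical accuracy."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of examples in bin
    return ece
```

A perfectly calibrated classifier (e.g. 80% accuracy among predictions made with confidence 0.8) scores 0; overconfident predictions inflate the score.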
no code implementations • 10 Nov 2019 • Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, Maarten de Rijke
Finally, we find that relative position heads seem integral to summarization performance and persistently remain after pruning.
no code implementations • 1 Jul 2019 • Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, Maarten de Rijke
We investigate whether distributions calculated by different attention heads in a transformer architecture can be used to improve transparency in the task of abstractive summarization.
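The distributions in question are the per-head attention weights of a transformer: each head turns scaled query–key dot products into a probability distribution over source tokens via a softmax. A minimal NumPy sketch of how these per-head distributions arise (function names and shapes are assumptions for illustration):

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def attention_distributions(Q, K, n_heads):
    """Split Q and K into heads and return each head's attention
    distribution softmax(Q K^T / sqrt(d_head)) over source tokens.
    Returns an array of shape (n_heads, seq_len, seq_len)."""
    seq_len, d_model = Q.shape
    d_head = d_model // n_heads
    Qh = Q.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    Kh = K.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)
    return softmax(scores, axis=-1)
```

Each row of each head's matrix is a probability distribution over tokens, which is what makes these weights a candidate signal for inspecting what a summarization model attends to.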
no code implementations • WS 2019 • Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni
We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task.