Search Results for author: Joris Baan

Found 7 papers, 2 papers with code

Interpreting Predictive Probabilities: Model Confidence or Human Label Variation?

no code implementations • 25 Feb 2024 • Joris Baan, Raquel Fernández, Barbara Plank, Wilker Aziz

With the rise of increasingly powerful and user-facing NLP systems, there is growing interest in assessing whether they have a good representation of uncertainty by evaluating the quality of their predictive distribution over outcomes.

Position

Uncertainty in Natural Language Generation: From Theory to Applications

no code implementations • 28 Jul 2023 • Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz

Recent advances in powerful Language Models have allowed Natural Language Generation (NLG) to emerge as an important technology that can not only perform traditional tasks like summarisation or translation, but also serve as a natural language interface to a variety of applications.

Active Learning, Text Generation

What Comes Next? Evaluating Uncertainty in Neural Text Generators Against Human Production Variability

1 code implementation • 19 May 2023 • Mario Giulianelli, Joris Baan, Wilker Aziz, Raquel Fernández, Barbara Plank

In Natural Language Generation (NLG) tasks, for any input, multiple communicative goals are plausible, and any goal can be put into words, or produced, in multiple ways.

Text Generation

Stop Measuring Calibration When Humans Disagree

1 code implementation • 28 Oct 2022 • Joris Baan, Wilker Aziz, Barbara Plank, Raquel Fernández

Calibration is a popular framework to evaluate whether a classifier knows when it does not know, i.e., whether its predictive probabilities are a good indication of how likely a prediction is to be correct.

Understanding Multi-Head Attention in Abstractive Summarization

no code implementations • 10 Nov 2019 • Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, Maarten de Rijke

Finally, we find that relative position heads seem integral to summarization performance and persistently remain after pruning.

Abstractive Text Summarization, Machine Translation +1

Do Transformer Attention Heads Provide Transparency in Abstractive Summarization?

no code implementations • 1 Jul 2019 • Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, Maarten de Rijke

We investigate whether distributions calculated by different attention heads in a transformer architecture can be used to improve transparency in the task of abstractive summarization.

Abstractive Text Summarization

On the Realization of Compositionality in Neural Networks

no code implementations • WS 2019 • Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni

We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task.
