1 code implementation • CMCL (ACL) 2022 • Ece Takmaz, Sandro Pezzelle, Raquel Fernández
In this work, we use a transformer-based pre-trained multimodal model, CLIP, to shed light on the mechanisms employed by human speakers when referring to visual entities.
1 code implementation • ACL (RepL4NLP) 2021 • Iuliia Parfenova, Desmond Elliott, Raquel Fernández, Sandro Pezzelle
We investigate the representations learned by vision and language models in tasks that require relational reasoning.
1 code implementation • CoNLL (EMNLP) 2021 • Mario Giulianelli, Raquel Fernández
Speakers are thought to use rational information transmission strategies for efficient communication (Genzel and Charniak, 2002; Aylett and Turk, 2004; Jaeger and Levy, 2007).
no code implementations • *SEM (NAACL) 2022 • Samuel Ryb, Mario Giulianelli, Arabella Sinclair, Raquel Fernández
We investigate the extent to which pre-trained language models acquire analytical and deductive logical reasoning capabilities as a side effect of learning word prediction.
1 code implementation • EMNLP 2021 • Mario Giulianelli, Arabella Sinclair, Raquel Fernández
The Uniform Information Density principle states that speakers plan their utterances to reduce fluctuations in the density of the information transmitted.
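The Uniform Information Density principle is commonly operationalised as low variance in per-token surprisal. A minimal sketch, using a toy unigram distribution as a stand-in for a real language model's conditional probabilities:

```python
import math

def surprisals(tokens, probs):
    """Per-token surprisal -log2 p(token) under a (toy) language model."""
    return [-math.log2(probs[t]) for t in tokens]

def uid_variance(tokens, probs):
    """One common UID operationalisation: the variance of per-token
    surprisal (lower variance = more uniform information density)."""
    s = surprisals(tokens, probs)
    mean = sum(s) / len(s)
    return sum((x - mean) ** 2 for x in s) / len(s)

# Toy unigram 'model'; real studies use a neural LM's conditional
# probabilities in context.
probs = {"the": 0.5, "cat": 0.25, "sat": 0.25, "axolotl": 0.0625}
print(uid_variance(["the", "cat", "sat"], probs))      # fairly uniform
print(uid_variance(["the", "the", "axolotl"], probs))  # spiky
```

An utterance whose tokens carry similar amounts of information scores a lower variance than one with an information spike.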
no code implementations • 14 May 2024 • Esam Ghaleb, Marlou Rasenberg, Wim Pouw, Ivan Toni, Judith Holler, Aslı Özyürek, Raquel Fernández
Conversation requires a substantial amount of coordination between dialogue participants, from managing turn-taking to negotiating mutual understanding.
1 code implementation • 23 Apr 2024 • Esam Ghaleb, Ilya Burenko, Marlou Rasenberg, Wim Pouw, Ivan Toni, Peter Uhrig, Anna Wilson, Judith Holler, Aslı Özyürek, Raquel Fernández
Our findings indicate that expanding the speech buffer beyond visual time segments improves performance, and that multimodal integration with cross-modal and early fusion techniques outperforms unimodal and late-fusion baselines.
no code implementations • 25 Feb 2024 • Joris Baan, Raquel Fernández, Barbara Plank, Wilker Aziz
With the rise of increasingly powerful and user-facing NLP systems, there is growing interest in assessing whether they have a good representation of uncertainty by evaluating the quality of their predictive distribution over outcomes.
no code implementations • 9 Feb 2024 • Alberto Testoni, Raquel Fernández
Clarification questions are an essential dialogue tool to signal misunderstanding, ambiguities, and under-specification in language use.
1 code implementation • 2 Feb 2024 • Ece Takmaz, Sandro Pezzelle, Raquel Fernández
There is an intricate relation between the properties of an image and how humans behave while describing the image.
1 code implementation • 26 Oct 2023 • Aditya K Surikuchi, Sandro Pezzelle, Raquel Fernández
A proper evaluation of stories generated for a sequence of images -- the task commonly referred to as visual storytelling -- must consider multiple aspects, such as coherence, grammatical correctness, and visual grounding.
1 code implementation • 23 Oct 2023 • Xinyi Chen, Raquel Fernández, Sandro Pezzelle
Despite the impressive performance achieved by pre-trained language-and-vision models in downstream tasks, it remains an open question whether this reflects a proper understanding of image-text interaction.
1 code implementation • 20 Oct 2023 • Mario Giulianelli, Sarenne Wallbridge, Raquel Fernández
We present information value, a measure which quantifies the predictability of an utterance relative to a set of plausible alternatives.
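Information value measures how far an utterance departs from what was plausible in context. A minimal sketch of the idea, computing the mean distance between an utterance and a set of alternatives in a hypothetical embedding space (the vectors and the cosine-distance choice here are illustrative assumptions, not the paper's exact setup):

```python
import math

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def information_value(utterance_vec, alternative_vecs):
    """Toy information value: mean distance between an utterance and a
    set of plausible alternatives; low = predictable, high = surprising."""
    dists = [cosine_distance(utterance_vec, a) for a in alternative_vecs]
    return sum(dists) / len(dists)

# Hypothetical 3-d embeddings of plausible alternative utterances.
alts = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]]
print(information_value([1.0, 0.15, 0.05], alts))  # close to alternatives: low
print(information_value([0.0, 0.1, 1.0], alts))    # far from alternatives: high
```

In the paper the alternatives are sampled from a generative model; the sketch only shows the aggregation step.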
1 code implementation • 16 Oct 2023 • Jirui Qi, Raquel Fernández, Arianna Bisazza
Finally, we conduct a case study on CLC when new factual associations are inserted in the PLMs via model editing.
1 code implementation • 21 Aug 2023 • Esam Ghaleb, Ilya Burenko, Marlou Rasenberg, Wim Pouw, Peter Uhrig, Judith Holler, Ivan Toni, Aslı Özyürek, Raquel Fernández
Yet, the prevalent approach to automatic gesture detection treats the problem as binary classification, classifying a segment as either containing a gesture or not, thus failing to capture its inherently sequential and contextual nature.
no code implementations • 28 Jul 2023 • Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz
Recent advances in powerful language models have allowed Natural Language Generation (NLG) to emerge as an important technology that can not only perform traditional tasks like summarisation or translation, but also serve as a natural language interface to a variety of applications.
1 code implementation • 31 May 2023 • Ece Takmaz, Nicolò Brandizzi, Mario Giulianelli, Sandro Pezzelle, Raquel Fernández
Inspired by psycholinguistic theories, we endow our speaker with the ability to adapt its referring expressions via a simulation module that monitors the effectiveness of planned utterances from the listener's perspective.
1 code implementation • 19 May 2023 • Mario Giulianelli, Joris Baan, Wilker Aziz, Raquel Fernández, Barbara Plank
In Natural Language Generation (NLG) tasks, for any input, multiple communicative goals are plausible, and any goal can be put into words, or produced, in multiple ways.
1 code implementation • 28 Oct 2022 • Joris Baan, Wilker Aziz, Barbara Plank, Raquel Fernández
Calibration is a popular framework to evaluate whether a classifier knows when it does not know, i.e., whether its predictive probabilities are a good indication of how likely a prediction is to be correct.
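A standard way to quantify this is Expected Calibration Error (ECE), which bins predictions by confidence and compares each bin's average confidence to its accuracy. A minimal stdlib-only sketch (the bin count and toy data are illustrative):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: per-bin |accuracy - mean confidence|, weighted by bin size."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            acc = sum(correct[i] for i in idx) / len(idx)
            avg_conf = sum(confidences[i] for i in idx) / len(idx)
            ece += (len(idx) / n) * abs(acc - avg_conf)
    return ece

# A model that is 90% confident but 100% accurate is *under*confident:
conf = [0.9, 0.9, 0.9, 0.9, 0.9]
hits = [1, 1, 1, 1, 1]
print(expected_calibration_error(conf, hits))  # 0.1
```

A perfectly calibrated classifier would score an ECE of zero; the paper argues for looking beyond this single scalar.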
1 code implementation • 15 Oct 2022 • Mario Giulianelli, Arabella Sinclair, Raquel Fernández
We hypothesise that speakers use construction repetition to mitigate information rate, leading to an overall decrease in utterance information content over the course of a dialogue.
1 code implementation • 30 Sep 2021 • Arabella Sinclair, Jaap Jumelet, Willem Zuidema, Raquel Fernández
We investigate the extent to which modern, neural language models are susceptible to structural priming, the phenomenon whereby the structure of a sentence makes the same structure more probable in a follow-up sentence.
no code implementations • COLING 2020 • Marco Del Tredici, Raquel Fernández
Cognitive and social traits of individuals are reflected in language use.
no code implementations • EMNLP 2020 • Ece Takmaz, Mario Giulianelli, Sandro Pezzelle, Arabella Sinclair, Raquel Fernández
We propose a generation model that produces referring utterances grounded in both the visual and the conversational context.
1 code implementation • EMNLP 2020 • Ece Takmaz, Sandro Pezzelle, Lisa Beinborn, Raquel Fernández
When speakers describe an image, they tend to look at objects before mentioning them.
1 code implementation • ACL 2020 • Mario Giulianelli, Marco Del Tredici, Raquel Fernández
This paper presents the first unsupervised approach to lexical semantic change that makes use of contextualised word representations.
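One common metric in this line of work compares a word's contextualised usage vectors across two time periods, e.g. via average pairwise cosine distance. A minimal sketch with hypothetical toy vectors (the real approach extracts usage representations from a contextualised model such as BERT):

```python
import math

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def average_pairwise_distance(usages_t1, usages_t2):
    """Mean cosine distance between every usage vector of a word at time
    t1 and every usage vector at time t2; higher = more semantic change."""
    total = sum(cosine_distance(u, v) for u in usages_t1 for v in usages_t2)
    return total / (len(usages_t1) * len(usages_t2))

# Toy 2-d 'usage vectors': a stable word vs. one that drifts.
stable = average_pairwise_distance([[1.0, 0.0]], [[1.0, 0.0]])
shifted = average_pairwise_distance([[1.0, 0.0]], [[0.0, 1.0]])
print(stable, shifted)
```

The sketch only shows the aggregation; the paper additionally clusters usages into interpretable usage types.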
no code implementations • IJCNLP 2019 • Marco Del Tredici, Diego Marcheggiani, Sabine Schulte im Walde, Raquel Fernández
Information about individuals can help to better understand what they say, particularly in social media where texts are short.
no code implementations • 27 Aug 2019 • Sandro Pezzelle, Raquel Fernández
This work aims at modeling how the meaning of gradable adjectives of size ('big', 'small') can be learned from visually-grounded contexts.
no code implementations • ACL 2019 • Claudio Greco, Barbara Plank, Raquel Fernández, Raffaella Bernardi
We study the issue of catastrophic forgetting in the context of neural multimodal approaches to Visual Question Answering (VQA).
no code implementations • ACL 2019 • Janosch Haber, Tim Baumgärtner, Ece Takmaz, Lieke Gelderloos, Elia Bruni, Raquel Fernández
This paper introduces the PhotoBook dataset, a large-scale collection of visually-grounded, task-oriented dialogues in English designed to investigate shared dialogue history accumulating during conversation.
no code implementations • WS 2019 • Ravi Shekhar, Ece Takmaz, Raquel Fernández, Raffaella Bernardi
The multimodal models used in the emerging field at the intersection of computational linguistics and computer vision implement the bottom-up processing of the 'Hub and Spoke' architecture proposed in cognitive science to represent how the brain processes and combines multi-sensory inputs.
no code implementations • WS 2018 • Yujie Xing, Raquel Fernández
Stylistic variation is critical to render the utterances generated by conversational agents natural and engaging.
3 code implementations • NAACL 2019 • Ravi Shekhar, Aashish Venkatesh, Tim Baumgärtner, Elia Bruni, Barbara Plank, Raffaella Bernardi, Raquel Fernández
We compare our approach to an alternative system which extends the baseline with reinforcement learning.
1 code implementation • NAACL 2019 • Marco Del Tredici, Raquel Fernández, Gemma Boleda
We present the first exploration of meaning shift over short periods of time in online communities using distributional representations.
no code implementations • WS 2018 • Dieuwke Hupkes, Sanne Bouwmeester, Raquel Fernández
We investigate how encoder-decoder models trained on a synthetic dataset of task-oriented dialogues process disfluencies, such as hesitations and self-corrections.
no code implementations • WS 2017 • Marco Del Tredici, Raquel Fernández
We introduce a framework for quantifying semantic variation of common words in Communities of Practice and in sets of topic-related communities.
no code implementations • COLING 2018 • Marco Del Tredici, Raquel Fernández
We investigate the birth and diffusion of lexical innovations in a large dataset of online social communities.
1 code implementation • 12 May 2018 • Filip Klubička, Raquel Fernández
Although research on hate speech is becoming ever more relevant, most of it still focuses on hate speech detection.
2 code implementations • ACL 2016 • Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, Raquel Fernández
We introduce LAMBADA, a dataset to evaluate the capabilities of computational models for text understanding by means of a word prediction task.
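The evaluation setup is simple: score a model by how often it predicts the final word of a passage from the preceding context. A minimal sketch, where `predict` is a hypothetical stand-in for a real language model:

```python
def last_word_accuracy(examples, predict):
    """LAMBADA-style scoring: fraction of passages whose target (final)
    word the model guesses from the context alone.
    examples: list of (context, target_word) pairs."""
    hits = sum(1 for ctx, target in examples if predict(ctx) == target)
    return hits / len(examples)

# Dummy 'model' that always guesses "the" -- real evaluations plug in an LM.
examples = [("he opened the", "door"), ("she fed the", "cat")]
print(last_word_accuracy(examples, lambda ctx: "the"))  # 0.0
```

LAMBADA's passages are selected so that humans succeed given the whole passage but fail given only the final sentence, which is what makes the benchmark demanding.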