1 code implementation • COLING (CogALex) 2020 • Jinbiao Yang, Stefan L. Frank, Antal Van den Bosch
Language users process utterances by segmenting them into many cognitive units, which vary in size and linguistic level.
1 code implementation • 14 Mar 2022 • Danny Merkx, Sebastiaan Scholten, Stefan L. Frank, Mirjam Ernestus, Odette Scharenborg
We furthermore investigate whether vector quantisation, a technique for discrete representation learning, aids the model in the discovery and recognition of words.
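Vector quantisation, as mentioned here, replaces each continuous vector with its nearest entry in a learned codebook, turning a continuous representation into a discrete one. The following is a minimal illustrative sketch of that lookup step only (all names and sizes are hypothetical, not the paper's model):

```python
import numpy as np

def quantise(vectors, codebook):
    """Map each row of `vectors` to its nearest codebook entry (squared Euclidean distance)."""
    # Pairwise squared distances: shape (n_vectors, n_codes)
    dists = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1)        # discrete codes
    return indices, codebook[indices]     # codes and their quantised vectors

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))        # 8 codes, 4-dimensional embeddings
frames = rng.normal(size=(5, 4))          # e.g. 5 speech-frame representations
codes, quantised = quantise(frames, codebook)
```

In a trained VQ model the codebook entries are learned jointly with the encoder; here they are random, since only the discretisation step is being illustrated.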
1 code implementation • CMCL (ACL) 2022 • Danny Merkx, Stefan L. Frank, Mirjam Ernestus
In this paper, we create visually grounded word embeddings by combining English text and images and compare them to popular text-based methods, to see whether visual information allows our model to better capture cognitive aspects of word meaning.
1 code implementation • 16 Jun 2021 • Danny Merkx, Stefan L. Frank, Mirjam Ernestus
This study addresses the question of whether visually grounded speech (VGS) models learn to capture sentence semantics without access to any prior linguistic knowledge.
1 code implementation • NAACL (CMCL) 2021 • Danny Merkx, Stefan L. Frank
Recurrent neural networks (RNNs) have long been an architecture of interest for computational models of human sentence processing.
1 code implementation • 9 Sep 2019 • Danny Merkx, Stefan L. Frank, Mirjam Ernestus
Humans learn language by interaction with their environment and listening to other humans.
no code implementations • WS 2019 • Alessandro Lopopolo, Stefan L. Frank, Antal Van den Bosch, Roel Willems
Backward saccades during reading have been hypothesized to be involved in structural reanalysis, or to be related to the level of text difficulty.
no code implementations • WS 2019 • Chara Tsoukala, Stefan L. Frank, Antal Van den Bosch, Jorge Valdés Kroff, Mirjam Broersma
To our knowledge, this is the first computational cognitive model that aims to simulate code-switched sentence production.
no code implementations • ACL 2017 • Tomer Cagan, Stefan L. Frank, Reut Tsarfaty
Opinionated Natural Language Generation (ONLG) is a new, challenging task that aims to automatically generate human-like, subjective responses to opinionated online articles.