no code implementations • CoNLL 2020 • Aurélie Herbelot
Many tasks are considered to be 'solved' in the computational linguistics literature, but the corresponding algorithms operate in ways which are radically different from human cognition.
no code implementations • RANLP 2019 • Jelke Bloem, Antske Fokkens, Aurélie Herbelot
Specifically, we inspect the behaviour of models that use a pre-trained background space during learning.
1 code implementation • ACL 2019 • Gosse Minnema, Aurélie Herbelot
Despite returning promising results, our experiments also demonstrate that much work remains to be done before distributional representations can reliably be predicted from brain data.
1 code implementation • ACL 2019 • Alexandre Kabbach, Kristina Gulordava, Aurélie Herbelot
In this paper, we investigate the task of learning word embeddings from very sparse data in an incremental, cognitively plausible way.
no code implementations • WS 2019 • Elizaveta Kuzmenko, Aurélie Herbelot
There are two main aspects to this difference: a) DSMs are built over corpus data which may or may not reflect 'what is in the world'; b) they are built from word co-occurrences, that is, from lexical types rather than entities and sets.
1 code implementation • COLING 2018 • Alexandre Kabbach, Corentin Ribeyre, Aurélie Herbelot
Knowing the state-of-the-art for a particular task is an essential component of any computational linguistics investigation.
no code implementations • COLING 2016 • Aurélie Herbelot, Ekaterina Kochmar
In this paper we discuss three key points related to error detection (ED) in learners' English.
no code implementations • COLING 2016 • Sebastian Padó, Aurélie Herbelot, Max Kisselew, Jan Šnajder
Compositional distributional semantic models (CDSMs) have successfully been applied to the task of predicting the meaning of a range of linguistic constructions.