Search Results for author: Lieke Gelderloos

Found 6 papers, 2 papers with code

Discrete representations in neural models of spoken language

1 code implementation · EMNLP (BlackboxNLP) 2021 · Bertrand Higy, Lieke Gelderloos, Afra Alishahi

The distributed and continuous representations used by neural networks are at odds with representations employed in linguistics, which are typically symbolic.

Tasks: Attribute, Quantization
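
The paper probes how discrete, symbol-like codes behave inside neural speech models. As a point of reference, here is a minimal sketch of vector quantization, one standard way to discretize continuous activations by snapping each vector to its nearest codebook entry; the dimensions and PyTorch framing are illustrative assumptions, not the paper's setup.

```python
import torch

def vector_quantize(z, codebook):
    """Map each continuous vector in z to its nearest codebook entry.

    z:        (batch, dim) continuous activations
    codebook: (num_codes, dim) embeddings of the discrete symbols
    Returns the quantized vectors and the discrete code indices.
    """
    dists = torch.cdist(z, codebook)   # distance to every code: (batch, num_codes)
    codes = dists.argmin(dim=1)        # one discrete symbol per input vector
    return codebook[codes], codes

# Example: 4 activations of dimension 8, a codebook of 32 symbols
z = torch.randn(4, 8)
codebook = torch.randn(32, 8)
quantized, codes = vector_quantize(z, codebook)
print(codes)  # e.g. tensor([17,  3, 28,  3])
```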

Learning to Understand Child-directed and Adult-directed Speech

no code implementations · ACL 2020 · Lieke Gelderloos, Grzegorz Chrupała, Afra Alishahi

Speech directed to children differs from adult-directed speech in linguistic aspects such as repetition, word choice, and sentence length, as well as in aspects of the speech signal itself, such as prosodic and phonemic variation.

Tasks: Language Acquisition, Sentence

The PhotoBook Dataset: Building Common Ground through Visually-Grounded Dialogue

no code implementations · ACL 2019 · Janosch Haber, Tim Baumgärtner, Ece Takmaz, Lieke Gelderloos, Elia Bruni, Raquel Fernández

This paper introduces the PhotoBook dataset, a large-scale collection of visually-grounded, task-oriented dialogues in English, designed to investigate the shared dialogue history that accumulates during conversation.

On the difficulty of a distributional semantics of spoken language

no code implementations · WS 2019 · Grzegorz Chrupała, Lieke Gelderloos, Ákos Kádár, Afra Alishahi

In the domain of unsupervised learning most work on speech has focused on discovering low-level constructs such as phoneme inventories or word-like units.

Representations of language in a model of visually grounded speech signal

2 code implementations · ACL 2017 · Grzegorz Chrupała, Lieke Gelderloos, Afra Alishahi

We present a visually grounded model of speech perception which projects spoken utterances and images to a joint semantic space.
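
A minimal PyTorch sketch of such a dual-encoder design: a speech encoder and an image encoder project into one shared space, and a margin-based ranking loss pulls matched pairs together. The layer choices, feature dimensions, and margin here are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SpeechImageEncoder(nn.Module):
    """Dual encoder mapping speech and images into one semantic space."""
    def __init__(self, audio_dim=13, image_dim=4096, joint_dim=512):
        super().__init__()
        self.speech_rnn = nn.GRU(audio_dim, joint_dim, batch_first=True)
        self.image_proj = nn.Linear(image_dim, joint_dim)

    def encode_speech(self, mfcc):        # (batch, time, audio_dim)
        _, h = self.speech_rnn(mfcc)      # final hidden state summarizes the utterance
        return nn.functional.normalize(h[-1], dim=-1)

    def encode_image(self, feats):        # (batch, image_dim) pretrained features
        return nn.functional.normalize(self.image_proj(feats), dim=-1)

def ranking_loss(s, i, margin=0.2):
    """Margin-based ranking loss: matched speech/image pairs should score
    higher than mismatched ones by at least the margin."""
    sims = s @ i.t()                           # cosine similarities (unit vectors)
    pos = sims.diag().unsqueeze(1)             # matched-pair similarities
    cost = (margin + sims - pos).clamp(min=0)  # penalize impostors
    cost.fill_diagonal_(0)                     # ignore the matched pairs themselves
    return cost.mean()

# Usage: 4 utterances (100 MFCC frames each) paired with 4 image vectors
model = SpeechImageEncoder()
mfcc = torch.rand(4, 100, 13)
imgs = torch.rand(4, 4096)
loss = ranking_loss(model.encode_speech(mfcc), model.encode_image(imgs))
```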

From phonemes to images: levels of representation in a recurrent neural model of visually-grounded language learning

no code implementations · COLING 2016 · Lieke Gelderloos, Grzegorz Chrupała

We present a model of visually-grounded language learning, based on stacked gated recurrent neural networks, that learns to predict visual features given an image description in the form of a sequence of phonemes.

Tasks: Grounded Language Learning
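
A minimal PyTorch sketch of the kind of architecture the abstract describes: phoneme embeddings fed through stacked GRUs, with the final hidden state mapped to an image-feature vector. All sizes and layer counts are illustrative assumptions, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn

class PhonemeToVisual(nn.Module):
    """Stacked-GRU model that reads a phoneme sequence and predicts the
    visual feature vector of the image it describes."""
    def __init__(self, n_phonemes=50, emb_dim=64, hidden_dim=512,
                 n_layers=3, visual_dim=4096):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, num_layers=n_layers,
                          batch_first=True)
        self.to_visual = nn.Linear(hidden_dim, visual_dim)

    def forward(self, phoneme_ids):    # (batch, seq_len) phoneme indices
        x = self.embed(phoneme_ids)
        _, h = self.gru(x)             # h: (n_layers, batch, hidden_dim)
        return self.to_visual(h[-1])   # predicted image features

# Example: a batch of 2 phoneme sequences of length 20
model = PhonemeToVisual()
ids = torch.randint(0, 50, (2, 20))
visual_pred = model(ids)               # shape (2, 4096)
```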
