Search Results for author: Roger P. Levy

Found 12 papers, 5 papers with code

Testing the Predictions of Surprisal Theory in 11 Languages

no code implementations · 7 Jul 2023 · Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, Roger P. Levy

We address this gap in the current literature by investigating the relationship between surprisal and reading times in eleven different languages, distributed across five language families.
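The quantity this paper relates to reading times is surprisal: the negative log probability of a word given its context. A minimal sketch of the computation, using a hypothetical toy bigram model (the probabilities are illustrative, not from the paper's data):

```python
import math

# Hypothetical bigram model: P(word | previous word), for illustration only.
bigram_probs = {
    ("the", "dog"): 0.2,
    ("the", "keys"): 0.05,
    ("dog", "barked"): 0.3,
}

def surprisal(prev, word, probs):
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(probs[(prev, word)])

# A more predictable word carries lower surprisal.
print(surprisal("the", "dog", bigram_probs))   # -log2(0.2)
print(surprisal("the", "keys", bigram_probs))  # -log2(0.05)
```

Surprisal theory predicts that per-word reading time increases with this quantity; the paper tests that link across eleven languages.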

Probing for Incremental Parse States in Autoregressive Language Models

1 code implementation · 17 Nov 2022 · Tiwalayo Eisape, Vineet Gangireddy, Roger P. Levy, Yoon Kim

This suggests implicit incremental syntactic inferences underlie next-word predictions in autoregressive neural language models.

Tasks: Sentence

How Adults Understand What Young Children Say

no code implementations · 15 Jun 2022 · Stephan C. Meylan, Ruthe Foushee, Nicole H. Wong, Elika Bergelson, Roger P. Levy

Children's early speech often bears little resemblance to that of adults, and yet parents and other caregivers are able to interpret that speech and react accordingly.

Tasks: Bayesian Inference, Language Acquisition

Grammar-Based Grounded Lexicon Learning

no code implementations · NeurIPS 2021 · Jiayuan Mao, Haoyue Shi, Jiajun Wu, Roger P. Levy, Joshua B. Tenenbaum

We present Grammar-Based Grounded Lexicon Learning (G2L2), a lexicalist approach toward learning a compositional and grounded meaning representation of language from grounded data, such as paired images and texts.

Tasks: Network Embedding, Sentence +1

A Targeted Assessment of Incremental Processing in Neural Language Models and Humans

1 code implementation · 6 Jun 2021 · Ethan Gotlieb Wilcox, Pranali Vani, Roger P. Levy

We present a targeted, scaled-up comparison of incremental processing in humans and neural language models by collecting by-word reaction time data for sixteen different syntactic test suites across a range of structural phenomena.

Tasks: Language Modelling, Sentence

A Rate-Distortion view of human pragmatic reasoning

no code implementations · 13 May 2020 · Noga Zaslavsky, Jennifer Hu, Roger P. Levy

What computational principles underlie human pragmatic reasoning?

A Systematic Assessment of Syntactic Generalization in Neural Language Models

1 code implementation · ACL 2020 · Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger P. Levy

While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.
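The benchmark metric mentioned here, perplexity, is the exponentiated average negative log-likelihood the model assigns per token. A minimal sketch with hypothetical per-token probabilities (illustrative values, not from any model in the paper):

```python
import math

# Hypothetical probabilities a language model assigns to each token
# of a short test sequence (illustration only).
token_probs = [0.25, 0.1, 0.5, 0.05]

# Perplexity = exp of the mean negative log-likelihood per token;
# lower perplexity means the model found the sequence more predictable.
nll = [-math.log(p) for p in token_probs]
perplexity = math.exp(sum(nll) / len(nll))
print(perplexity)
```

The paper's point is that driving this number down does not by itself guarantee human-like syntactic generalization, which is why it pairs perplexity with targeted syntactic test suites.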

Tasks: Language Modelling

Do RNNs learn human-like abstract word order preferences?

1 code implementation · WS 2019 · Richard Futrell, Roger P. Levy

We collect human acceptability ratings for our stimuli, in the first acceptability judgment experiment directly manipulating the predictors of syntactic alternations.

Tasks: Language Modelling, Sentence

Modeling the effects of memory on human online sentence processing with particle filters

no code implementations · NeurIPS 2008 · Roger P. Levy, Florencia Reali, Thomas L. Griffiths

Language comprehension in humans is significantly constrained by memory, yet rapid, highly incremental, and capable of utilizing a wide range of contextual information to resolve ambiguity and form expectations about future input.

Tasks: Sentence
