no code implementations • 7 Jul 2023 • Ethan Gotlieb Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, Roger P. Levy
We address this gap in the current literature by investigating the relationship between surprisal and reading times in eleven languages spanning five language families.
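Surprisal here is the standard information-theoretic quantity -log p(word | context), estimated from a language model. As a minimal sketch of how per-word surprisal can be computed (GPT-2 via Hugging Face transformers is used purely as an illustrative stand-in; the paper's actual models, corpora, and languages differ):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# GPT-2 is an illustrative stand-in, not the paper's model.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Return (token, surprisal in bits) for each token after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits             # (1, seq_len, vocab)
    # Position t-1 predicts token t, so shift logits against targets.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(len(targets)), targets]
    bits = nll / torch.log(torch.tensor(2.0))  # nats -> bits
    return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()),
                    bits.tolist()))

for tok, s in token_surprisals("The horse raced past the barn fell."):
    print(f"{tok:>10s}  {s:6.2f} bits")
```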
1 code implementation • 18 Dec 2022 • Songlin Yang, Roger P. Levy, Yoon Kim
We study grammar induction with mildly context-sensitive grammars for unsupervised discontinuous parsing.
1 code implementation • 17 Nov 2022 • Tiwalayo Eisape, Vineet Gangireddy, Roger P. Levy, Yoon Kim
This suggests that implicit, incremental syntactic inferences underlie next-word predictions in autoregressive neural language models.
no code implementations • 15 Jun 2022 • Stephan C. Meylan, Ruthe Foushee, Nicole H. Wong, Elika Bergelson, Roger P. Levy
Children's early speech often bears little resemblance to that of adults, and yet parents and other caregivers are able to interpret that speech and react accordingly.
no code implementations • NeurIPS 2021 • Jiayuan Mao, Haoyue Shi, Jiajun Wu, Roger P. Levy, Joshua B. Tenenbaum
We present Grammar-Based Grounded Lexicon Learning (G2L2), a lexicalist approach toward learning a compositional and grounded meaning representation of language from grounded data, such as paired images and texts.
1 code implementation • 6 Jun 2021 • Ethan Gotlieb Wilcox, Pranali Vani, Roger P. Levy
We present a targeted, scaled-up comparison of incremental processing in humans and neural language models, collecting by-word reaction-time data for sixteen syntactic test suites that span a range of structural phenomena.
no code implementations • 6 Feb 2021 • Stephan C. Meylan, Ruthe Foushee, Elika Bergelson, Roger P. Levy
How do adults understand children's speech?
no code implementations • 13 May 2020 • Noga Zaslavsky, Jennifer Hu, Roger P. Levy
What computational principles underlie human pragmatic reasoning?
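A prominent candidate answer is the Rational Speech Act (RSA) framework, in which speakers and listeners reason recursively about one another. Below is a minimal sketch of the basic RSA recursion on a toy reference game; the lexicon, prior, and rationality parameter are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Toy reference game: 2 utterances x 3 objects (illustrative lexicon).
# lexicon[u, o] = 1 if utterance u is literally true of object o.
lexicon = np.array([[1.0, 1.0, 0.0],   # u0: true of objects 0, 1
                    [0.0, 1.0, 1.0]])  # u1: true of objects 1, 2
prior = np.ones(3) / 3   # uniform prior over objects (assumed)
alpha = 1.0              # speaker rationality (assumed value)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

L0 = normalize(lexicon * prior, axis=1)   # literal listener   L0(o|u)
S1 = normalize(L0 ** alpha, axis=0)       # pragmatic speaker  S1(u|o)
L1 = normalize(S1 * prior, axis=1)        # pragmatic listener L1(o|u)

print(L1)  # u0 now favors object 0 and u1 favors object 2,
           # even though both utterances are literally ambiguous.
```

After one round of recursion the pragmatic listener resolves each ambiguous utterance toward the object it picks out most informatively, the classic RSA effect.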
1 code implementation • ACL 2020 • Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger P. Levy
While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.
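For reference, perplexity is the exponentiated average negative log-likelihood of held-out text; a tiny sketch with hypothetical per-token log-probabilities, just to pin down the formula:

```python
import math

# Hypothetical per-token log-probabilities log p(w_t | w_<t) from an LM.
log_probs = [-2.3, -0.7, -4.1, -1.2]

# perplexity = exp(-(1/N) * sum_t log p(w_t | w_<t)); lower is better.
ppl = math.exp(-sum(log_probs) / len(log_probs))
print(f"perplexity = {ppl:.2f}")
```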
1 code implementation • WS 2019 • Richard Futrell, Roger P. Levy
We collect human acceptability ratings for our stimuli in the first acceptability-judgment experiment to directly manipulate the predictors of syntactic alternations.
no code implementations • NeurIPS 2008 • Roger P. Levy, Florencia Reali, Thomas L. Griffiths
Language comprehension in humans is significantly constrained by memory, yet rapid, highly incremental, and capable of utilizing a wide range of contextual information to resolve ambiguity and form expectations about future input.
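A standard way to model this kind of bounded-memory, incremental inference is a particle filter, which tracks a fixed number of hypotheses and resamples them as each new input arrives. The sketch below is a generic particle filter over a toy hidden-state model, not the paper's sentence-processing model; the transition and emission tables are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden-state model standing in for incremental interpretation
# hypotheses; transition (T) and emission (E) tables are placeholders.
n_states, n_obs = 4, 3
T = rng.dirichlet(np.ones(n_states), size=n_states)  # T[s] = p(s' | s)
E = rng.dirichlet(np.ones(n_obs), size=n_states)     # E[s] = p(obs | s)

def particle_filter(observations, n_particles=200):
    """Bounded-memory incremental inference over hidden states."""
    particles = rng.integers(n_states, size=n_particles)  # uniform init
    for obs in observations:
        # Advance each hypothesis one step through the transition model.
        particles = np.array([rng.choice(n_states, p=T[s]) for s in particles])
        # Weight hypotheses by how well they explain the new input...
        weights = E[particles, obs]
        weights /= weights.sum()
        # ...and resample: memory is bounded, so unlikely hypotheses die out.
        particles = rng.choice(particles, size=n_particles, p=weights)
    # Empirical posterior over hidden states after the full input.
    return np.bincount(particles, minlength=n_states) / n_particles

print(particle_filter([0, 2, 1, 1]))
```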