no code implementations • 9 Jun 2023 • Kinan Martin, Jon Gauthier, Canaan Breiss, Roger Levy
Textless self-supervised speech models have grown in capabilities in recent years, but the nature of the linguistic information they encode has not yet been thoroughly examined.
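One common way to examine what such models encode is linear probing: extract frame-level representations from a pretrained model and test whether a simple classifier can read out a linguistic label. The sketch below is an illustrative assumption, not the paper's protocol; the wav2vec 2.0 checkpoint, the random audio, and the random frame labels are all placeholders.

```python
# A hedged sketch of a linear probing setup for a self-supervised speech
# model. Checkpoint, audio, and labels are placeholders, not the paper's
# data or method.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()
audio = torch.randn(1, 16000)  # one second of fake 16 kHz audio
with torch.no_grad():
    frames = model(audio).last_hidden_state[0].numpy()  # (n_frames, dim)

# Probe: can frame representations linearly predict a frame-level label
# (e.g. phone identity)? Random labels here stand in for annotations.
labels = np.random.default_rng(0).integers(0, 5, size=len(frames))
probe = LogisticRegression(max_iter=1000).fit(frames, labels)
print(f"probe train accuracy: {probe.score(frames, labels):.2f}")
```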
no code implementations • 22 May 2023 • Jon Gauthier, Roger Levy
We fit a computational model of online auditory word recognition to explain scalp EEG signals recorded as subjects passively listened to a fictional story, revealing both the dynamics of the recognition process and the neural correlates of the recognition and integration of words.
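The paper's own model is not reproduced here, but the broader family of analysis — a time-lagged linear encoding model regressing EEG on word-level predictors such as surprisal — can be sketched as follows. All shapes, the synthetic data, and the choice of ridge regression are illustrative assumptions.

```python
# A minimal sketch (not the paper's actual model): a time-lagged linear
# encoding model predicting EEG from a word-level predictor time series.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_channels = 5000, 32          # EEG: time points x electrodes
eeg = rng.standard_normal((n_samples, n_channels))

# One predictor time series, e.g. word surprisal impulses at word onsets.
surprisal = np.zeros(n_samples)
onsets = rng.choice(n_samples, size=400, replace=False)
surprisal[onsets] = rng.gamma(shape=2.0, scale=1.0, size=400)

# Lagged design matrix: column `lag` holds the predictor shifted so the
# response at time t is modeled from the predictor at t - lag.
n_lags = 60
X = np.column_stack([np.roll(surprisal, lag) for lag in range(n_lags)])
X[:n_lags] = 0.0                          # zero out wrap-around artifacts

model = Ridge(alpha=1.0).fit(X, eeg)
# model.coef_ has shape (n_channels, n_lags): one estimated temporal
# response function per electrode, analogous to an ERP regression.
print(model.coef_.shape)
```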
no code implementations • 18 Dec 2022 • Koustuv Sinha, Jon Gauthier, Aaron Mueller, Kanishka Misra, Keren Fuentes, Roger Levy, Adina Williams
In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality.
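A hedged sketch of this kind of manipulation: score a grammatical/ungrammatical minimal pair with and without extra preceding context and compare the model's preference. The GPT-2 checkpoint, the sentences, and the context string are illustrative choices, not the paper's materials.

```python
# Compare a language model's preference for the grammatical member of a
# minimal pair as the preceding context changes.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(text: str) -> float:
    """Summed log-probability of `text` under the model."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean token NLL; scale back to a summed log-prob.
    return -out.loss.item() * (ids.shape[1] - 1)

good = "The keys to the cabinet are here."
bad = "The keys to the cabinet is here."
context = "I looked everywhere this morning. "

for prefix in ("", context):
    delta = sentence_logprob(prefix + good) - sentence_logprob(prefix + bad)
    print(f"context={bool(prefix)}: preference for grammatical = {delta:.2f}")
```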
no code implementations • ACL 2020 • Jon Gauthier, Jennifer Hu, Ethan Wilcox, Peng Qian, Roger Levy
Targeted syntactic evaluations have yielded insights into the generalizations learned by neural network language models.
1 code implementation • 2 Jun 2020 • Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, Roger Levy
Human reading behavior is tuned to the statistics of natural language: the time it takes human subjects to read a word can be predicted from estimates of the word's probability in context.
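The word-probability estimates in question are usually expressed as surprisal, -log2 p(word | context). A minimal sketch of computing per-token surprisal with GPT-2 (an illustrative choice of language model, not necessarily one used in the paper):

```python
# Per-token surprisal, in bits, from an autoregressive language model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

text = "The old man the boats."   # a classic garden-path sentence
ids = tok(text, return_tensors="pt").input_ids[0]
with torch.no_grad():
    logits = lm(ids.unsqueeze(0)).logits[0]
logprobs = torch.log_softmax(logits, dim=-1)

for i in range(1, len(ids)):
    # logits at position i-1 give the distribution over token i.
    surprisal = -logprobs[i - 1, ids[i]].item() / math.log(2)
    print(f"{tok.decode(ids[i].item()):>10s}  {surprisal:6.2f} bits")
```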
1 code implementation • ACL 2020 • Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger P. Levy
While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.
1 code implementation • IJCNLP 2019 • Jon Gauthier, Roger Levy
Through task ablations and representational analyses, we find that tasks which produce syntax-light representations yield significant improvements in brain decoding performance.
1 code implementation • 2 Jun 2018 • Jon Gauthier, Anna Ivanova
Language decoding studies have identified word representations which can be used to predict brain activity in response to novel words and sentences (Anderson et al., 2016; Pereira et al., 2018).
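A minimal sketch of the general setup in such studies, with synthetic placeholders for all data: learn a linear map from word/sentence embeddings to brain activity, then test whether the predictions identify held-out stimuli.

```python
# Linear brain encoding with an identification test; all data synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_stimuli, n_voxels, emb_dim = 240, 2000, 300
true_map = rng.standard_normal((emb_dim, n_voxels)) / np.sqrt(emb_dim)
embeddings = rng.standard_normal((n_stimuli, emb_dim))
brain = embeddings @ true_map + 0.5 * rng.standard_normal((n_stimuli, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, brain, random_state=0)
encoder = Ridge(alpha=10.0).fit(X_tr, y_tr)
pred = encoder.predict(X_te)

# Identification: is each predicted brain image most correlated with the
# actual image for the same held-out stimulus?
n = len(X_te)
sims = np.corrcoef(pred, y_te)[:n, n:]
acc = (sims.argmax(axis=1) == np.arange(n)).mean()
print(f"held-out identification accuracy: {acc:.2f}")
```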
no code implementations • 14 May 2018 • Jon Gauthier, Roger Levy, Joshua B. Tenenbaum
Children learning their first language face multiple problems of induction: how to learn the meanings of words, and how to build meaningful phrases from those words according to syntactic rules.
1 code implementation • WS 2017 • Li Lucy, Jon Gauthier
Distributional word representation methods exploit word co-occurrences to build compact vector encodings of words.
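A minimal sketch of this count-based approach: tally co-occurrences within a fixed window over a toy corpus, then compress the matrix with truncated SVD to obtain dense vectors. The corpus, window size, and dimensionality are toy choices.

```python
# Count-based distributional embeddings: co-occurrence matrix + SVD.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

window = 2
counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            counts[idx[w], idx[corpus[j]]] += 1

# Truncated SVD: keep the top-k left singular vectors as embeddings.
U, S, _ = np.linalg.svd(counts)
k = 3
embeddings = U[:, :k] * S[:k]
for w in vocab:
    print(f"{w:>4s}", np.round(embeddings[idx[w]], 2))
```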
no code implementations • 12 Oct 2016 • Jon Gauthier, Igor Mordatch
A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts.
3 code implementations • ACL 2016 • Samuel R. Bowman, Jon Gauthier, Abhinav Rastogi, Raghav Gupta, Christopher D. Manning, Christopher Potts
Tree-structured neural networks exploit valuable syntactic parse information as they interpret the meanings of sentences.
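The sketch below shows the basic idea with a plain recursive network, which is much simpler than the shift-reduce SPINN model the paper proposes: word vectors are composed bottom-up along a binary parse with a single learned layer.

```python
# A toy recursive network composing word vectors along a binary parse.
import torch
import torch.nn as nn

class TreeRNN(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.compose = nn.Linear(2 * dim, dim)

    def forward(self, tree, vocab):
        # A tree is either a word (str) or a pair (left, right).
        if isinstance(tree, str):
            return self.embed(torch.tensor(vocab[tree]))
        left, right = tree
        l, r = self.forward(left, vocab), self.forward(right, vocab)
        return torch.tanh(self.compose(torch.cat([l, r])))

vocab = {w: i for i, w in enumerate(["the", "cat", "sat"])}
model = TreeRNN(len(vocab))
# ((the cat) sat): composition order follows the parse, not word order.
sentence_vec = model((("the", "cat"), "sat"), vocab)
print(sentence_vec.shape)  # torch.Size([16])
```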