Search Results for author: Jon Gauthier

Found 12 papers, 6 papers with code

Probing self-supervised speech models for phonetic and phonemic information: a case study in aspiration

no code implementations 9 Jun 2023 Kinan Martin, Jon Gauthier, Canaan Breiss, Roger Levy

Textless self-supervised speech models have grown in capabilities in recent years, but the nature of the linguistic information they encode has not yet been thoroughly examined.
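
A minimal sketch of the kind of linear probing setup the title refers to, using random placeholder arrays in place of real frame-level speech-model features (e.g. wav2vec 2.0 activations) and aspiration labels; the array sizes and probe choice are illustrative assumptions, not details from the paper:

```python
# Minimal linear-probe sketch: can a logistic-regression classifier recover an
# aspiration label from frame-level speech-model features? The feature matrix
# and labels below are random placeholders standing in for real activations
# (e.g. wav2vec 2.0 hidden states) and annotated stop consonants.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_frames, feat_dim = 2000, 768               # placeholder sizes
X = rng.normal(size=(n_frames, feat_dim))    # stand-in for model activations
y = rng.integers(0, 2, size=n_frames)        # 1 = aspirated stop, 0 = unaspirated

probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, X, y, cv=5)
# On real features, above-chance accuracy would suggest the representation
# encodes aspiration; on this random placeholder it hovers around chance.
print(f"probe accuracy: {scores.mean():.3f}")
```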

The neural dynamics of auditory word recognition and integration

no code implementations 22 May 2023 Jon Gauthier, Roger Levy

We fit this model to explain scalp EEG signals recorded as subjects passively listened to a fictional story, revealing both the dynamics of the online auditory word recognition process and the neural correlates of the recognition and integration of words.

EEG
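
One standard way to relate word-level predictors to continuous EEG, broadly in the spirit of the entry above, is a time-lagged ridge "encoding model". The sketch below runs entirely on synthetic data, and the surprisal-impulse predictor is an illustrative assumption rather than the paper's actual model:

```python
# Sketch of a time-lagged ridge encoding model: predict one EEG channel from a
# word-level predictor (surprisal impulses at word onsets). All data here are
# synthetic placeholders; wrap-around at the signal edges is ignored for brevity.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 5000                                   # EEG samples (e.g. 128 Hz recording)
stimulus = np.zeros(n_samples)
onsets = rng.choice(n_samples - 100, size=300, replace=False)
stimulus[onsets] = rng.gamma(2.0, 1.0, size=300)   # surprisal impulse at each word onset

lags = np.arange(0, 77)                            # roughly 0-600 ms of lags at 128 Hz
X = np.stack([np.roll(stimulus, lag) for lag in lags], axis=1)
eeg = X @ rng.normal(size=len(lags)) + rng.normal(scale=2.0, size=n_samples)  # fake channel

X_tr, X_te, y_tr, y_te = train_test_split(X, eeg, test_size=0.2, shuffle=False)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.3f}")  # how well the lagged predictor explains the channel
```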

Language model acceptability judgements are not always robust to context

no code implementations 18 Dec 2022 Koustuv Sinha, Jon Gauthier, Aaron Mueller, Kanishka Misra, Keren Fuentes, Roger Levy, Adina Williams

In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality.

In-Context Learning, Language Modelling +1
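
A rough sketch of the kind of check the abstract describes: compare a model's preference on a grammatical/ungrammatical minimal pair as the prepended context grows. It assumes the Hugging Face transformers library and GPT-2; the item and contexts are made-up illustrations, not materials from the paper:

```python
# Sketch: does prepending context change a language model's preference between
# a grammatical and an ungrammatical sentence? Uses GPT-2 via Hugging Face
# transformers; the sentences and contexts are made-up illustrations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def logprob(text: str) -> float:
    """Total log-probability the model assigns to `text`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logps = torch.log_softmax(logits[0, :-1], dim=-1)
    return logps.gather(1, ids[0, 1:, None]).sum().item()

good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
contexts = ["", "I walked in. ", "I walked in. It was dark. The house was quiet. "]

# Because both sentences share the same prefix, comparing the two totals amounts
# to comparing the conditional probability of each sentence given the context.
for ctx in contexts:
    prefers_good = logprob(ctx + good) > logprob(ctx + bad)
    print(f"context length {len(ctx):3d} chars -> prefers grammatical: {prefers_good}")
```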

On the Predictive Power of Neural Language Models for Human Real-Time Comprehension Behavior

1 code implementation 2 Jun 2020 Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, Roger Levy

Human reading behavior is tuned to the statistics of natural language: the time it takes human subjects to read a word can be predicted from estimates of the word's probability in context.

Open-Ended Question Answering
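
A small sketch of the quantity at issue, per-word surprisal (negative log2 probability in context), computed with GPT-2 via the Hugging Face transformers library; the "reading times" it is correlated with are fabricated placeholders, not data from the paper:

```python
# Sketch: per-word surprisal from GPT-2, correlated with placeholder reading
# times. The reading times below are fabricated purely for illustration.
import numpy as np
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

sentence = "The old man the boats."
ids = tok(sentence, return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits
logps = torch.log_softmax(logits[0, :-1], dim=-1)
token_logps = logps.gather(1, ids[0, 1:, None]).squeeze(1)
surprisal = (-token_logps).numpy() / np.log(2.0)        # bits per token

tokens = tok.convert_ids_to_tokens(ids[0].tolist())[1:]
rng = np.random.default_rng(0)
fake_rts = 250 + 30 * surprisal + rng.normal(scale=20, size=len(surprisal))  # placeholder RTs (ms)
rho, _ = spearmanr(surprisal, fake_rts)
for t, s in zip(tokens, surprisal):
    print(f"{t:>10}  {s:6.2f} bits")
print(f"Spearman rho with the placeholder reading times: {rho:.2f}")
```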

A Systematic Assessment of Syntactic Generalization in Neural Language Models

1 code implementation ACL 2020 Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, Roger P. Levy

While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge.

Language Modelling
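
The sketch below illustrates the general recipe behind such targeted evaluations with a single hand-written agreement item: compare the model's surprisal on the grammatical vs. ungrammatical continuation of the same prefix. It assumes GPT-2 via Hugging Face transformers; the item and pass criterion are illustrative, not taken from the paper's test suites:

```python
# Sketch of a targeted syntactic evaluation: the model should assign lower
# surprisal to the grammatical continuation than to the ungrammatical one,
# given the same prefix. A single hand-written item; real suites contain many.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def continuation_surprisal(prefix: str, continuation: str) -> float:
    """Surprisal (nats) of `continuation` conditioned on `prefix`.
    Assumes BPE tokenization splits cleanly at the prefix/continuation boundary."""
    prefix_ids = tok(prefix, return_tensors="pt").input_ids
    full_ids = tok(prefix + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logps = torch.log_softmax(logits[0, :-1], dim=-1)
    token_logps = logps.gather(1, full_ids[0, 1:, None]).squeeze(1)
    return -token_logps[prefix_ids.shape[1] - 1:].sum().item()

prefix = "The author that the critics praised"
s_good = continuation_surprisal(prefix, " was talented.")
s_bad = continuation_surprisal(prefix, " were talented.")
print(f"grammatical: {s_good:.2f} nats, ungrammatical: {s_bad:.2f} nats")
print("model passes this item:", s_good < s_bad)
```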

Linking artificial and human neural representations of language

1 code implementation IJCNLP 2019 Jon Gauthier, Roger Levy

Through further task ablations and representational analyses, we find that tasks which produce syntax-light representations yield significant improvements in brain decoding performance.

Brain Decoding, Natural Language Understanding +1
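
"Brain decoding" in this line of work typically means learning a linear map from brain images to model representations of the same sentences and checking how well held-out items can be matched. A schematic version on synthetic placeholder data (real evaluations use recorded fMRI and actual model embeddings):

```python
# Schematic brain-decoding evaluation: ridge-regress from fMRI voxels to
# sentence embeddings, then score held-out sentences by whether the decoded
# vector is nearest to the correct sentence's embedding. Data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sent, n_voxels, emb_dim = 200, 1000, 64
embeddings = rng.normal(size=(n_sent, emb_dim))     # stand-in for model representations
true_map = rng.normal(size=(emb_dim, n_voxels))
brain = embeddings @ true_map + rng.normal(scale=5.0, size=(n_sent, n_voxels))  # fake fMRI

correct = 0
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(brain):
    decoder = Ridge(alpha=1.0).fit(brain[train], embeddings[train])
    decoded = decoder.predict(brain[test])
    # nearest-neighbour matching against all candidate embeddings
    for i, idx in enumerate(test):
        dists = np.linalg.norm(embeddings - decoded[i], axis=1)
        correct += int(np.argmin(dists) == idx)
print(f"decoding accuracy: {correct / n_sent:.2f} (chance ~ {1 / n_sent:.3f})")
```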

Does the brain represent words? An evaluation of brain decoding studies of language understanding

1 code implementation 2 Jun 2018 Jon Gauthier, Anna Ivanova

Language decoding studies have identified word representations which can be used to predict brain activity in response to novel words and sentences (Anderson et al., 2016; Pereira et al., 2018).

Brain Decoding, Sentence
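
The snippet above describes the complementary "encoding" direction: predicting brain activity from word representations. Below is a schematic version on synthetic placeholder data, scored with a leave-two-out pairwise (2 vs. 2) test; the evaluation choice and all numbers are illustrative assumptions, not results from the cited studies:

```python
# Schematic encoding-model check: predict brain responses from word vectors,
# then run a leave-two-out pairwise test (is the predicted response closer to
# the correct image than to the swapped one?). All data are synthetic.
import numpy as np
from itertools import combinations
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_words, emb_dim, n_voxels = 60, 50, 500
word_vecs = rng.normal(size=(n_words, emb_dim))                       # stand-in word representations
brain = word_vecs @ rng.normal(size=(emb_dim, n_voxels)) \
        + rng.normal(scale=3.0, size=(n_words, n_voxels))             # fake brain responses

wins = trials = 0
for i, j in combinations(range(n_words), 2):
    train = [k for k in range(n_words) if k not in (i, j)]
    model = Ridge(alpha=1.0).fit(word_vecs[train], brain[train])
    pred_i, pred_j = model.predict(word_vecs[[i, j]])
    correct = np.linalg.norm(pred_i - brain[i]) + np.linalg.norm(pred_j - brain[j])
    swapped = np.linalg.norm(pred_i - brain[j]) + np.linalg.norm(pred_j - brain[i])
    wins += int(correct < swapped)
    trials += 1
print(f"pairwise (2 vs. 2) accuracy: {wins / trials:.2f} (chance = 0.50)")
```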

Word learning and the acquisition of syntactic–semantic overhypotheses

no code implementations 14 May 2018 Jon Gauthier, Roger Levy, Joshua B. Tenenbaum

Children learning their first language face multiple problems of induction: how to learn the meanings of words, and how to build meaningful phrases from those words according to syntactic rules.

Language Acquisition

A Paradigm for Situated and Goal-Driven Language Learning

no code implementations 12 Oct 2016 Jon Gauthier, Igor Mordatch

A distinguishing property of human intelligence is the ability to flexibly use language in order to communicate complex ideas with other humans in a variety of contexts.
