Search Results for author: Marten van Schijndel

Found 22 papers, 6 papers with code

Dual Mechanism Priming Effects in Hindi Word Order

no code implementations 25 Oct 2022 Sidharth Ranjan, Marten van Schijndel, Sumeet Agarwal, Rajakrishnan Rajkumar

By showing that different priming influences are separable from one another, our results support the hypothesis that multiple different cognitive mechanisms underlie priming.

Language Modelling · Sentence

Discourse Context Predictability Effects in Hindi Word Order

no code implementations 25 Oct 2022 Sidharth Ranjan, Marten van Schijndel, Sumeet Agarwal, Rajakrishnan Rajkumar

While prior work has shown that a number of factors (e.g., information status, dependency length, and syntactic surprisal) influence Hindi word order preferences, the role of discourse predictability is underexplored in the literature.

Sentence

All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality

1 code implementation EMNLP 2021 William Timkey, Marten van Schijndel

Moreover, we find a striking mismatch between the dimensions that dominate similarity measures and those which are important to the behavior of the model.
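
A minimal sketch of the phenomenon the paper describes (illustrative only, not the authors' released code; the toy vectors and the rogue-dimension magnitude are invented for the example): a single shared high-magnitude dimension can push the cosine similarity of two otherwise unrelated vectors toward 1, so the measure stops reflecting the remaining dimensions.

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    a, b = rng.normal(size=768), rng.normal(size=768)
    print(cosine(a, b))   # near 0: the vectors are unrelated

    # Give both vectors one shared high-magnitude "rogue" dimension
    a[0], b[0] = 400.0, 400.0
    print(cosine(a, b))   # near 1: one dimension now dominates the measure

One simple remedy is to standardize each dimension across a sample before computing similarity, which removes this dominance.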

To Point or Not to Point: Understanding How Abstractive Summarizers Paraphrase Text

no code implementations Findings (ACL) 2021 Matt Wilber, William Timkey, Marten van Schijndel

Abstractive neural summarization models have seen great improvements in recent years, as shown by ROUGE scores of the generated summaries.

Abstractive Text Summarization

Uncovering Constraint-Based Behavior in Neural Models via Targeted Fine-Tuning

1 code implementation ACL 2021 Forrest Davis, Marten van Schijndel

We show that competing processes in a language act as constraints on model behavior and demonstrate that targeted fine-tuning can re-weight the learned constraints, uncovering otherwise dormant linguistic knowledge in models.

Filler-gaps that neural networks fail to generalize

no code implementations CoNLL 2020 Debasmita Bhattacharya, Marten van Schijndel

We use cumulative priming to test for representational overlap between disparate filler-gap constructions in English and find evidence that the models learn a general representation for the existence of filler-gap dependencies.

Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment

1 code implementation ACL 2020 Forrest Davis, Marten van Schijndel

A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e., is a grammatical sentence more probable than an ungrammatical sentence?).

Language Modelling · Sentence +1
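
A minimal sketch of this evaluation style (an illustration of the general method, not the paper's code; the choice of GPT-2 and the agreement minimal pair are assumptions made for the example): score both members of a grammatical/ungrammatical pair with a causal language model and compare total log probabilities.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def sentence_logprob(sentence):
        ids = tokenizer(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss    # mean NLL over predicted tokens
        return -loss.item() * (ids.size(1) - 1)   # total log probability (nats)

    grammatical = "The keys to the cabinet are on the table."
    ungrammatical = "The keys to the cabinet is on the table."
    print(sentence_logprob(grammatical) > sentence_logprob(ungrammatical))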

Quantity doesn't buy quality syntax with neural language models

no code implementations IJCNLP 2019 Marten van Schijndel, Aaron Mueller, Tal Linzen

We investigate to what extent these shortcomings can be mitigated by increasing the size of the network and the corpus on which it is trained.

Can Entropy Explain Successor Surprisal Effects in Reading?

no code implementations WS 2019 Marten van Schijndel, Tal Linzen

Human reading behavior is sensitive to surprisal: more predictable words tend to be read faster.

Language Modelling
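
For reference, the two quantities contrasted in this paper are easy to state concretely (a toy sketch; the three-word distribution is invented): surprisal is the negative log probability of the word that actually occurred, while entropy is the expected surprisal over the whole next-word distribution, computable before the word is seen.

    import numpy as np

    def surprisal(p_next, w):
        # Surprisal of the observed word w: -log2 P(w | context)
        return -np.log2(p_next[w])

    def entropy(p_next):
        # Entropy of the next-word distribution: the expected surprisal
        return -np.sum(p_next * np.log2(p_next))

    p_next = np.array([0.7, 0.2, 0.1])  # toy distribution over a 3-word vocabulary
    print(surprisal(p_next, 0))  # ~0.51 bits: a predictable word, read faster
    print(surprisal(p_next, 2))  # ~3.32 bits: a surprising word, read slower
    print(entropy(p_next))       # ~1.16 bits, independent of which word occurs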

A Neural Model of Adaptation in Reading

1 code implementation EMNLP 2018 Marten van Schijndel, Tal Linzen

It has been argued that humans rapidly adapt their lexical and syntactic expectations to match the statistics of the current linguistic context.

Language Modelling
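
A minimal sketch of the adaptation idea (the paper adapts an LSTM language model; the pretrained Transformer LM and the SGD learning rate here are stand-ins chosen for the example): after computing a sentence's loss, take a gradient step on that same sentence, so the model's expectations shift toward the statistics of the text being read.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    optimizer = torch.optim.SGD(model.parameters(), lr=2e-5)

    def read_and_adapt(sentences):
        for sentence in sentences:
            ids = tokenizer(sentence, return_tensors="pt").input_ids
            loss = model(ids, labels=ids).loss  # mean surprisal of this sentence
            print(sentence, loss.item())
            loss.backward()                     # adapt: one gradient step on the
            optimizer.step()                    # sentence that was just "read"
            optimizer.zero_grad()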

Addressing surprisal deficiencies in reading time models

no code implementations WS 2016 Marten van Schijndel, William Schuler

This study demonstrates a weakness in how n-gram and PCFG surprisal are used to predict reading times in eye-tracking data.

Memory access during incremental sentence processing causes reading time latency

no code implementations WS 2016 Cory Shain, Marten van Schijndel, Richard Futrell, Edward Gibson, William Schuler

Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur in processing and (2) have had mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally-occurring stimuli.

Sentence
