no code implementations • 13 May 2024 • Sidharth Ranjan, Marten van Schijndel
Our results suggest that, although non-canonical corpus sentences show a preference for minimizing dependency length relative to their generated variants, this factor does not significantly contribute to identifying corpus sentences above and beyond surprisal and givenness measures.
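The dependency-length measure discussed above can be sketched as a sum of linear head-dependent distances over a parse. The encoding below (1-based head indices, 0 marking the root) is an illustrative assumption in the spirit of CoNLL-U, not the authors' implementation:

```python
def total_dependency_length(heads):
    """Sum of linear distances between each token and its head.

    heads[i] is the 1-based position of token (i+1)'s head;
    0 marks the root, which has no incoming dependency.
    """
    return sum(abs((i + 1) - h) for i, h in enumerate(heads) if h != 0)

# "the dog barked": "the" -> "dog", "dog" -> "barked", "barked" = root
print(total_dependency_length([2, 3, 0]))  # 2
```

Word orders that place dependents closer to their heads yield a smaller total, which is the quantity a minimization preference would favor.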
no code implementations • 25 Oct 2022 • Sidharth Ranjan, Marten van Schijndel, Sumeet Agarwal, Rajakrishnan Rajkumar
By showing that different priming influences are separable from one another, our results support the hypothesis that multiple different cognitive mechanisms underlie priming.
no code implementations • 25 Oct 2022 • Sidharth Ranjan, Marten van Schijndel, Sumeet Agarwal, Rajakrishnan Rajkumar
While prior work has shown that a number of factors (e.g., information status, dependency length, and syntactic surprisal) influence Hindi word order preferences, the role of discourse predictability is underexplored in the literature.
1 code implementation • EMNLP 2021 • William Timkey, Marten van Schijndel
Moreover, we find a striking mismatch between the dimensions that dominate similarity measures and those which are important to the behavior of the model.
no code implementations • Findings (ACL) 2021 • Matt Wilber, William Timkey, Marten van Schijndel
Abstractive neural summarization models have seen great improvements in recent years, as shown by ROUGE scores of the generated summaries.
1 code implementation • ACL 2021 • Forrest Davis, Marten van Schijndel
We show that competing processes in a language act as constraints on model behavior and demonstrate that targeted fine-tuning can re-weight the learned constraints, uncovering otherwise dormant linguistic knowledge in models.
no code implementations • CoNLL 2020 • Debasmita Bhattacharya, Marten van Schijndel
We use cumulative priming to test for representational overlap between disparate filler-gap constructions in English and find evidence that the models learn a general representation for the existence of filler-gap dependencies.
1 code implementation • CoNLL 2020 • Forrest Davis, Marten van Schijndel
Language models (LMs) trained on large quantities of text have been claimed to acquire abstract linguistic representations.
1 code implementation • ACL 2020 • Forrest Davis, Marten van Schijndel
A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e., is a grammatical sentence more probable than an ungrammatical one?).
1 code implementation • CoNLL 2019 • Grusha Prasad, Marten van Schijndel, Tal Linzen
Neural language models (LMs) perform well on tasks that require sensitivity to syntactic structure.
no code implementations • IJCNLP 2019 • Marten van Schijndel, Aaron Mueller, Tal Linzen
We investigate to what extent these shortcomings can be mitigated by increasing the size of the network and the corpus on which it is trained.
no code implementations • WS 2019 • Marten van Schijndel, Tal Linzen
Human reading behavior is sensitive to surprisal: more predictable words tend to be read faster.
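Surprisal is standardly defined as the negative log-probability of a word given its preceding context. The sketch below illustrates the definition with made-up probabilities; it is not tied to any particular language model:

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(prob)

# A highly predictable word (p = 0.5) carries 1 bit of surprisal;
# an unexpected one (p = 0.0625) carries 4 bits and tends to be read
# more slowly.
print(surprisal(0.5))     # 1.0
print(surprisal(0.0625))  # 4.0
```

In reading-time studies the probabilities come from a trained language model, and per-word surprisal is regressed against eye-tracking or self-paced reading latencies.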
1 code implementation • EMNLP 2018 • Marten van Schijndel, Tal Linzen
It has been argued that humans rapidly adapt their lexical and syntactic expectations to match the statistics of the current linguistic context.
no code implementations • WS 2016 • Cory Shain, Marten van Schijndel, Richard Futrell, Edward Gibson, William Schuler
Studies on the role of memory as a predictor of reading time latencies (1) differ in their predictions about when memory effects should occur during processing and (2) have produced mixed results, with strong positive effects emerging from isolated constructed stimuli and weak or even negative effects emerging from naturally occurring stimuli.
no code implementations • WS 2016 • Marten van Schijndel, William Schuler
This study demonstrates a weakness in how n-gram and PCFG surprisal are used to predict reading times in eye-tracking data.
no code implementations • WS 2015 • Marten van Schijndel, Brian Murphy, William Schuler
no code implementations • ACL 2014 • Marten van Schijndel, Micha Elsner
no code implementations • NAACL 2013 • Marten van Schijndel, William Schuler
no code implementations • WS 2012 • Marten van Schijndel, Andy Exley, William Schuler