no code implementations • NAACL (CMCL) 2021 • Paula Lissón, Dorothea Pregla, Dario Paape, Frank Burchert, Nicole Stadie, Shravan Vasishth
Several researchers have argued that sentence comprehension is mediated by a content-addressable retrieval mechanism that allows fast, direct access to memory items.
no code implementations • 9 Mar 2023 • Maximilian M. Rabe, Dario Paape, Daniela Mertzen, Shravan Vasishth, Ralf Engbert
Developing such an integrated model is extremely challenging and computationally demanding, but such an integration is an important step toward complete mathematical models of natural language comprehension in reading.
no code implementations • 10 Aug 2020 • Audrey Bürki, F.-Xavier Alario, Shravan Vasishth
Finally, we found that distractor word frequency and target word frequency interact; the effect of distractor frequency decreases as the frequency of the target word increases.
no code implementations • 14 Mar 2017 • Paul Mätzig, Shravan Vasishth, Felix Engelmann, David Caplan
We present a computational evaluation of three hypotheses about sources of deficit in sentence comprehension in aphasia: slowed processing, intermittent deficiency, and resource reduction.
no code implementations • 12 Mar 2017 • Shravan Vasishth, Lena A. Jäger, Bruno Nicenboim
One explanation for this facilitation effect is the feature percolation account: the plural feature on "cabinets" percolates up to the head noun "key", leading to the illusion.
no code implementations • 2 Feb 2017 • Shravan Vasishth, Nicolas Chopin, Robin Ryder, Bruno Nicenboim
We present a case study demonstrating the usefulness of Bayesian hierarchical mixture modelling for investigating cognitive processes.
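The hierarchical mixture idea can be illustrated with a small generative simulation. This is a minimal sketch with made-up parameter values, not the authors' actual model: each trial comes from a task-engaged process with probability theta, or from a slower contaminant process otherwise, and subject-level means are drawn from a group-level distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Group-level (hyper)parameters: illustrative values, not from the paper.
mu_group, sigma_group = 6.0, 0.25   # log-RT mean/sd of the "engaged" process
mu_contam = 7.0                     # slower contaminant process
theta = 0.8                         # mixing weight: P(engaged)

n_subj, n_trials = 20, 50
rts = []
for s in range(n_subj):
    # Subject-level mean drawn from the group-level distribution.
    mu_s = rng.normal(mu_group, 0.1)
    engaged = rng.random(n_trials) < theta
    log_rt = np.where(engaged,
                      rng.normal(mu_s, sigma_group, n_trials),
                      rng.normal(mu_contam, 0.5, n_trials))
    rts.append(np.exp(log_rt))

rts = np.concatenate(rts)
print(f"simulated {rts.size} RTs, mean = {rts.mean():.0f} ms")
```

In a full Bayesian analysis the mixing weight and the component parameters would be estimated jointly (e.g. in Stan); the simulation above only shows the generative structure being assumed.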
no code implementations • 13 Dec 2016 • Bruno Nicenboim, Shravan Vasishth
We show that a modified activation model, in which the accumulation of evidence for retrieving incorrect items is not only slower but also noisier (i.e., the variances for correct and incorrect items differ), can provide a fit as good as that of the direct access model.
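The modification described here can be sketched as a race between two noisy evidence accumulators, with a lower drift rate and a higher noise scale for the incorrect item. All numeric values below are illustrative assumptions, not fitted parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def race_trial(v_correct=1.0, v_incorrect=0.6,
               s_correct=1.0, s_incorrect=1.6,
               threshold=2.0, dt=0.001, t_max=5.0):
    """One race trial: two independent accumulators; the first to reach
    the threshold determines the response and its latency."""
    n = int(t_max / dt)
    noise_c = rng.normal(0, s_correct * np.sqrt(dt), n)
    noise_i = rng.normal(0, s_incorrect * np.sqrt(dt), n)
    x_c = np.cumsum(v_correct * dt + noise_c)
    x_i = np.cumsum(v_incorrect * dt + noise_i)
    hit_c = np.argmax(x_c >= threshold) if (x_c >= threshold).any() else n
    hit_i = np.argmax(x_i >= threshold) if (x_i >= threshold).any() else n
    winner = "correct" if hit_c <= hit_i else "incorrect"
    return winner, min(hit_c, hit_i) * dt

trials = [race_trial() for _ in range(2000)]
acc = np.mean([w == "correct" for w, _ in trials])
print(f"P(correct retrieval) = {acc:.2f}")
```

The extra noise on the incorrect accumulator lets it occasionally win the race early, which is the mechanism the abstract appeals to when matching the direct access model's fit.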
no code implementations • 24 Oct 2016 • Thierry Poibeau, Shravan Vasishth
This special issue is dedicated to getting a better picture of the relationship between computational linguistics and cognitive science.
1 code implementation • 5 Nov 2015 • Hannes Matuschek, Reinhold Kliegl, Shravan Vasishth, Harald Baayen, Douglas Bates
Linear mixed-effects models have increasingly replaced mixed-model analyses of variance for statistical inference in factorial psycholinguistic experiments.
Applications
1 code implementation • 20 Jun 2015 • Tanner Sorensen, Shravan Vasishth
With the arrival of the R packages nlme and lme4, linear mixed models (LMMs) have come to be widely used in experimentally driven areas like psychology, linguistics, and cognitive science.
Methodology
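The abstract names the R packages nlme and lme4. As a rough Python analogue, and purely as an illustrative sketch on synthetic data rather than the tools the paper discusses, a random-intercept LMM can be fit with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Synthetic repeated-measures data: 30 subjects x 40 trials,
# a two-level condition factor, and by-subject random intercepts.
n_subj, n_trials = 30, 40
subj = np.repeat(np.arange(n_subj), n_trials)
cond = np.tile([0, 1], n_subj * n_trials // 2)
subj_intercept = rng.normal(0, 50, n_subj)[subj]
rt = 400 + 30 * cond + subj_intercept + rng.normal(0, 80, n_subj * n_trials)
data = pd.DataFrame({"rt": rt, "cond": cond, "subject": subj})

# Random-intercept LMM: rt ~ cond + (1 | subject) in lme4 notation.
model = smf.mixedlm("rt ~ cond", data, groups=data["subject"])
fit = model.fit()
print(fit.summary())
```

The fitted fixed effect for `cond` should recover the simulated 30 ms condition effect, while the group variance absorbs the by-subject intercept variability.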
1 code implementation • 16 Jun 2015 • Douglas Bates, Reinhold Kliegl, Shravan Vasishth, Harald Baayen
The analysis of experimental data with mixed-effects models requires decisions about the specification of the appropriate random-effects structure.
Methodology