no code implementations • NAACL (CMCL) 2021 • Nora Hollenstein, Emmanuele Chersoni, Cassandra L. Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
The goal of the task is to predict 5 different token-level eye-tracking metrics of the Zurich Cognitive Language Processing Corpus (ZuCo).
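As a rough illustration of this kind of setup, the following is a minimal, hypothetical per-token regression baseline; the file name, feature set, and metric column (here FFD, first fixation duration) are placeholders, not the shared task's official format.

```python
# Hypothetical baseline: predict a token-level eye-tracking metric from simple
# surface features. File and column names are illustrative placeholders.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("zuco_tokens.csv")            # assumed: one row per token
df["length"] = df["word"].str.len()            # word length in characters
df["is_title"] = df["word"].str.istitle().astype(int)

X = df[["length", "is_title"]]
y = df["FFD"]                                  # e.g., first fixation duration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = Ridge().fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```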
no code implementations • CMCL (ACL) 2022 • Nora Hollenstein, Emmanuele Chersoni, Cassandra Jacobs, Yohei Oseki, Laurent Prévot, Enrico Santus
We present the second shared task on eye-tracking data prediction of the Cognitive Modeling and Computational Linguistics Workshop (CMCL).
no code implementations • 20 Feb 2024 • Ryo Yoshida, Taiga Someya, Yohei Oseki
Large Language Models (LLMs) have achieved remarkable success thanks to their scalability on large text corpora, but suffer from drawbacks in training efficiency.
no code implementations • 19 Feb 2024 • Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin
This also showcases the advantage of cognitively-motivated LMs, which are typically employed in cognitive modeling, in the computational simulation of language universals.
1 code implementation • 13 Nov 2023 • Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin
In other words, pure next-word probability remains a strong predictor for human reading behavior, even in the age of LLMs.
2 code implementations • 22 Sep 2023 • Taiga Someya, Yushi Sugimoto, Yohei Oseki
In this paper, we introduce JCoLA (Japanese Corpus of Linguistic Acceptability), which consists of 10,020 sentences annotated with binary acceptability judgments.
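A common use of such a corpus is to test how well language model probabilities track human acceptability judgments. The sketch below scores a sentence by its total log-probability under a causal LM; the model name is only a convenient placeholder (a Japanese LM would be used for JCoLA in practice), and this is not the paper's evaluation procedure.

```python
# Hypothetical sketch: score sentence acceptability via total log-probability
# under a causal LM. The checkpoint name is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; a Japanese causal LM would be used in practice
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForCausalLM.from_pretrained(model_name).eval()

def sentence_logprob(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token;
    # multiply by the number of predicted tokens to get the total log-prob.
    return -out.loss.item() * (ids.size(1) - 1)

print(sentence_logprob("The cat sat on the mat."))
```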
1 code implementation • 24 Oct 2022 • Ryo Yoshida, Yohei Oseki
In this paper, we propose a novel architecture called Composition Attention Grammars (CAGs) that recursively compose subtrees into a single vector representation with a composition function, and selectively attend to previous structural information with a self-attention mechanism.
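The sketch below is an illustrative approximation of those two ingredients, not the authors' implementation: a composition function that reduces a subtree's child vectors to a single vector, and self-attention over previously composed structural representations.

```python
# Minimal, illustrative sketch (not the CAG implementation) of a composition
# function plus self-attention over prior structural representations.
import torch
import torch.nn as nn

class CompositionAttentionSketch(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.compose = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.project = nn.Linear(2 * dim, dim)
        self.attend = nn.MultiheadAttention(dim, heads, batch_first=True)

    def compose_subtree(self, children: torch.Tensor) -> torch.Tensor:
        # children: (1, n_children, dim) -> single subtree vector (1, dim)
        hidden, _ = self.compose(children)
        return self.project(hidden.mean(dim=1))

    def attend_structure(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (1, n_nodes, dim); attend over previously composed structure
        out, _ = self.attend(nodes, nodes, nodes)
        return out[:, -1]  # representation conditioned on prior structure

sketch = CompositionAttentionSketch()
children = torch.randn(1, 3, 64)   # three child-node embeddings
nodes = torch.randn(1, 5, 64)      # previously composed nodes
print(sketch.compose_subtree(children).shape, sketch.attend_structure(nodes).shape)
```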
1 code implementation • 23 May 2022 • Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui
Language models (LMs) have been used in cognitive modeling as well as engineering studies: they compute information-theoretic complexity metrics that simulate humans' cognitive load during reading.
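The canonical example of such a metric is surprisal, -log2 p(token | preceding context). The sketch below computes per-token surprisal with an off-the-shelf causal LM; GPT-2 appears here purely as a stand-in model, not as the model used in the paper.

```python
# Illustrative sketch: per-token surprisal, -log2 p(token | context), from a
# causal LM. GPT-2 is used only as a convenient stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def surprisals(sentence: str):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)   # predict next token
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(targets.size(0)), targets]
    bits = nll / torch.log(torch.tensor(2.0))                # nats -> bits
    return list(zip(tok.convert_ids_to_tokens(targets), bits.tolist()))

for token, s in surprisals("The horse raced past the barn fell."):
    print(f"{token:>12s}  {s:6.2f} bits")
```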
2 code implementations • EMNLP 2021 • Ryo Yoshida, Hiroshi Noji, Yohei Oseki
In computational linguistics, it has been shown that hierarchical structures make language models (LMs) more human-like.
1 code implementation • ACL 2021 • Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui
Overall, our results suggest that a cross-lingual evaluation will be necessary to construct human-like computational models.
1 code implementation • Findings (ACL) 2021 • Hiroshi Noji, Yohei Oseki
However, Recurrent Neural Network Grammars (RNNGs) are known to be harder to scale due to the difficulty of batched training.
no code implementations • LREC 2020 • Yohei Oseki, Masayuki Asahara
Importantly, this cross-fertilization between NLP, on the one hand, and the cognitive (neuro)science of language, on the other, has been driven by language resources annotated with human language processing data.
no code implementations • WS 2019 • Yohei Oseki, Yasutada Sudo, Hiromu Sakai, Alec Marantz
Previous "wug" tests (Berko, 1958) on Japanese verbal inflection have demonstrated that Japanese speakers, both adults and children, cannot inflect novel present tense forms to "correct" past tense forms predicted by rules of existent verbs (de Chene, 1982; Vance, 1987, 1991; Klafehn, 2003, 2013), indicating that Japanese verbs are merely stored in the mental lexicon.
no code implementations • WS 2019 • Yohei Oseki, Charles Yang, Alec Marantz
Sentences are represented as hierarchical syntactic structures, which have been successfully modeled in sentence processing.