no code implementations • 11 Mar 2024 • Grusha Prasad, Tal Linzen
Structural priming is a widely used psycholinguistic paradigm for studying human sentence representations.
no code implementations • 30 Nov 2023 • Aryaman Chobey, Oliver Smith, Anzi Wang, Grusha Prasad
While some work has found that surprisal estimates from neural language models can predict a wide range of human neural and behavioral responses, other work studying more complex syntactic phenomena has found that these estimates yield incorrect behavioral predictions.
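The surprisal of a word is its negative log-probability given the preceding context; lower-probability words carry higher surprisal and predict longer reading times. A minimal sketch of this quantity, using a toy bigram model in place of the neural language models the work above evaluates (the corpus and model here are purely illustrative):

```python
import math
from collections import Counter

# Toy corpus for maximum-likelihood bigram estimates (illustrative only).
corpus = "the cat sat on the mat the cat saw the dog".split()

# Count bigrams and the contexts they condition on.
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def surprisal(context: str, word: str) -> float:
    """Surprisal in bits: -log2 P(word | context)."""
    p = bigrams[(context, word)] / contexts[context]
    return -math.log2(p)

# "the" is followed by: cat (2x), mat (1x), dog (1x),
# so P(cat | the) = 2/4 = 0.5 and its surprisal is 1 bit.
print(surprisal("the", "cat"))  # → 1.0
```

In the neural-LM setting, the same definition applies, with P(word | context) read off the model's softmax distribution at each position.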
no code implementations • CoNLL (EMNLP) 2021 • Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg
We apply this method to study how BERT models of different sizes process relative clauses (RCs).
no code implementations • NAACL 2021 • Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, Adina Williams
We introduce Dynabench, an open-source platform for dynamic dataset creation and model benchmarking.
no code implementations • EMNLP (BlackboxNLP) 2021 • Grusha Prasad, Yixin Nie, Mohit Bansal, Robin Jia, Douwe Kiela, Adina Williams
Given the increasingly prominent role NLP models play, and will play, in our lives, it is important for human expectations of model behavior to align with actual model behavior.
1 code implementation • CoNLL 2019 • Grusha Prasad, Marten van Schijndel, Tal Linzen
Neural language models (LMs) perform well on tasks that require sensitivity to syntactic structure.