no code implementations • 13 Nov 2023 • Dota Tianai Dong, Mariya Toneva
Using brain recordings of participants watching a popular TV show, we analyze the effects of multi-modal connections and interactions in a pre-trained multi-modal video transformer on the alignment with uni- and multi-modal brain regions.
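A common way to quantify this kind of model-to-brain alignment, sketched below with synthetic data (the paper's exact pipeline may differ), is to fit a ridge encoding model from model-layer features to fMRI responses and score held-out predictions:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative stand-ins: per-timepoint transformer features and fMRI voxels.
n_trs, n_feat, n_vox = 500, 256, 100
X = rng.standard_normal((n_trs, n_feat))          # model-layer features per fMRI TR
W = rng.standard_normal((n_feat, n_vox)) * 0.1
Y = X @ W + rng.standard_normal((n_trs, n_vox))   # synthetic voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Ridge encoding model mapping model features to all voxels jointly.
enc = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X_tr, Y_tr)
Y_hat = enc.predict(X_te)

# Alignment score: per-voxel Pearson r between predicted and held-out responses.
r = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_vox)]
print(f"mean held-out voxel correlation: {np.mean(r):.3f}")
```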
no code implementations • 8 Nov 2023 • Subba Reddy Oota, Emin Çelik, Fatma Deniz, Mariya Toneva
We investigate this question via a direct approach, in which we eliminate information related to specific low-level stimulus features (textual, speech, and visual) in the language model representations, and observe how this intervention affects the alignment with fMRI brain recordings acquired while participants read versus listened to the same naturalistic stories.
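The elimination step can be approximated by regressing the targeted low-level feature out of the representations and keeping the residuals; a minimal sketch with synthetic data, not necessarily the authors' exact procedure:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n, d_repr, d_feat = 1000, 128, 10

reprs = rng.standard_normal((n, d_repr))  # language-model representations
# A low-level stimulus feature correlated with the representations (synthetic).
low_level = reprs[:, :d_feat] + 0.5 * rng.standard_normal((n, d_feat))

# Predict the representations from the low-level feature and subtract:
# the residuals contain only what is not linearly explainable by that
# feature, and can then be re-scored for brain alignment.
proj = LinearRegression().fit(low_level, reprs)
residual_reprs = reprs - proj.predict(low_level)
```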
no code implementations • 7 Nov 2023 • Ruchit Rawal, Mariya Toneva
Possessing a wide variety of invariances may be a key reason for the recent successes of large language models, and our framework can shed light on the types of invariances that are retained by, or emerge in, new models.
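For illustration only, one crude proxy for an invariance is how little a representation moves under a meaning-preserving rewrite; `toy_embed` below is a hypothetical stand-in for any sentence encoder, not the paper's framework:

```python
import numpy as np

def toy_embed(text, dim=64):
    # Hypothetical stand-in for a model's sentence representation:
    # a hashed bag-of-words, so rewrites sharing words land nearby.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def invariance_score(embed, pairs):
    # Mean similarity across meaning-preserving rewrites; a score near 1.0
    # means the representation is invariant to that class of edits.
    return float(np.mean([cosine(embed(a), embed(b)) for a, b in pairs]))

pairs = [("the cat sat on the mat", "on the mat the cat sat"),
         ("she quickly left", "she left quickly")]
print(invariance_score(toy_embed, pairs))  # word-order invariance of the toy encoder
```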
no code implementations • 18 Oct 2023 • Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert Müller, Mariya Toneva, Thomas L. Griffiths
Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields.
no code implementations • 12 Jul 2023 • Gabriele Merlin, Vedant Nanda, Ruchit Rawal, Mariya Toneva
The pretrain-finetune paradigm usually improves downstream performance over training a model from scratch on the same task, and has become commonplace across many areas of machine learning.
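The two training routes being compared are easy to state in code; a schematic sketch with Hugging Face `transformers` (model choice is illustrative):

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# Route 1: start from pretrained weights, then finetune on the downstream task.
finetuned = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Route 2: same architecture, randomly initialized, trained from scratch.
config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2)
scratch = AutoModelForSequenceClassification.from_config(config)

# Both would then be trained on identical downstream data; the
# pretrain-finetune route typically reaches higher accuracy.
```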
no code implementations • 30 May 2023 • Camila Kolling, Till Speicher, Vedant Nanda, Mariya Toneva, Krishna P. Gummadi
Concretely, we show how PNKA can be leveraged to develop a deeper understanding of (a) the input examples that are likely to be misclassified, (b) the concepts encoded by (individual) neurons in a layer, and (c) the effects of fairness interventions on learned representations.
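A pointwise similarity score in the spirit of PNKA, reconstructed from the abstract's description rather than the paper's exact definition, assigns each input its own alignment value by comparing how that input relates to all other inputs in the two representation spaces:

```python
import numpy as np

def pointwise_alignment(A, B):
    """Per-example alignment of two representation matrices (n x d1, n x d2).

    For each example i, compare its vector of similarities to all other
    examples under representation A vs. representation B. Low scores flag
    examples the two models 'see' differently (e.g., likely misclassifications
    or examples most affected by a fairness intervention)."""
    def row_normalize(X):
        return X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-9)

    Ka = row_normalize(A) @ row_normalize(A).T   # n x n cosine-similarity kernels
    Kb = row_normalize(B) @ row_normalize(B).T
    Ka, Kb = row_normalize(Ka), row_normalize(Kb)
    return np.sum(Ka * Kb, axis=1)               # cosine between matching kernel rows

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 32))
scores = pointwise_alignment(A, A + 0.1 * rng.standard_normal((100, 32)))
print(scores.min(), scores.mean())
```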
no code implementations • 24 Jan 2023 • Sebastian Michelmann, Manoj Kumar, Kenneth A. Norman, Mariya Toneva
In the future, GPT-3 may thereby help to elucidate the principles underlying human event perception.
2 code implementations • 21 Dec 2022 • Khai Loong Aw, Mariya Toneva
We show that training language models for deeper narrative understanding results in richer representations that have improved alignment to human brain activity.
no code implementations • 1 Dec 2022 • Gabriele Merlin, Mariya Toneva
The first perturbation is to improve the model's ability to predict the next word in the specific naturalistic stimulus text that the brain recordings correspond to.
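That perturbation amounts to fine-tuning the language model with a next-word-prediction loss on the stimulus text itself; a minimal sketch with Hugging Face `transformers` (model and hyperparameters are illustrative):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative: fine-tune GPT-2 to predict the next word of the stimulus story.
tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

stimulus_text = "Once upon a time ..."  # the story the participants heard/read
ids = tok(stimulus_text, return_tensors="pt").input_ids

model.train()
for _ in range(3):                 # a few passes over the stimulus
    out = model(ids, labels=ids)   # labels=ids yields the causal LM loss
    out.loss.backward()
    opt.step()
    opt.zero_grad()
```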
1 code implementation • 21 Feb 2022 • Mariya Toneva, Jennifer Williams, Anand Bollu, Christoph Dann, Leila Wehbe
It is then natural to ask: "Is the activity in these different brain zones caused by the stimulus properties in the same way?"
no code implementations • 23 Aug 2021 • Peer Herholz, Eddy Fortier, Mariya Toneva, Nicolas Farrugia, Leila Wehbe, Valentina Borghesani
Real-world generalization, e.g., deciding to approach a never-seen-before animal, relies on contextual information as well as previous experiences.
no code implementations • 29 Jan 2021 • Mostafa Abdou, Ana Valeria Gonzalez, Mariya Toneva, Daniel Hershcovich, Anders Søgaard
We evaluate, across two fMRI datasets, whether language models align better with brain recordings if their attention is biased by annotations from syntactic or semantic formalisms.
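One way to bias attention toward such annotations, sketched here as an auxiliary loss rather than the paper's specific injection method, is to push an attention head's mass onto annotated token pairs such as dependency arcs:

```python
import torch

def attention_bias_loss(attn, arcs):
    """attn: (seq, seq) attention weights from one head (rows sum to 1).
    arcs: (dependent, head) index pairs from a syntactic parse.
    Cross-entropy pushing each dependent's attention toward its head."""
    rows = torch.tensor([d for d, _ in arcs])
    cols = torch.tensor([h for _, h in arcs])
    return -torch.log(attn[rows, cols] + 1e-9).mean()

attn = torch.softmax(torch.randn(5, 5), dim=-1)  # toy attention map
print(attention_bias_loss(attn, [(1, 0), (3, 2)]))
```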
1 code implementation • NeurIPS 2020 • Mariya Toneva, Otilia Stretcu, Barnabas Poczos, Leila Wehbe, Tom M. Mitchell
These results suggest that only the end of semantic processing of a word is task-dependent, and pose a challenge for future research to formulate new hypotheses for earlier task effects as a function of the task and stimuli.
1 code implementation • NeurIPS 2019 • Dan Schwartz, Mariya Toneva, Leila Wehbe
Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain.
1 code implementation • NeurIPS 2019 • Mariya Toneva, Leila Wehbe
Our results reveal differences in the context-related representations across these models.
3 code implementations • ICLR 2019 • Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon
Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks.
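The central bookkeeping in this line of work can be sketched as counting "forgetting events": transitions of a training example from correctly to incorrectly classified across epochs (array shapes here are illustrative):

```python
import numpy as np

def count_forgetting_events(correct_history):
    """correct_history: (n_epochs, n_examples) boolean array recording whether
    each training example was classified correctly at each epoch. A forgetting
    event is a transition from correct to incorrect for the same example."""
    h = correct_history.astype(int)
    flips = (h[:-1] == 1) & (h[1:] == 0)
    return flips.sum(axis=0)  # per-example forgetting counts

rng = np.random.default_rng(3)
history = rng.random((10, 6)) > 0.3      # toy correctness trajectories
print(count_forgetting_events(history))  # 'unforgettable' examples score 0
```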