Search Results for author: Evelina Fedorenko

Found 20 papers, 7 papers with code

SentSpace: Large-Scale Benchmarking and Evaluation of Text using Cognitively Motivated Lexical, Syntactic, and Semantic Features

no code implementations • NAACL (ACL) 2022 • Greta Tuckute, Aalok Sathe, Mingye Wang, Harley Yoder, Cory Shain, Evelina Fedorenko

The modular design of SentSpace allows researchers to easily integrate their own feature computation into the pipeline while benefiting from a common framework for evaluation and visualization.

Benchmarking • Sentence

Lexicon-Level Contrastive Visual-Grounding Improves Language Modeling

1 code implementation • 21 Mar 2024 • Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas

Today's most accurate language models are trained on orders of magnitude more language data than human language learners receive - but with no supervision from other sensory modalities that play a crucial role in human learning.

Grounded language learning • Language Modelling • +2
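The paper's lexicon-level objective is not reproduced here, but a minimal CLIP-style contrastive loss between word and image embeddings illustrates the general idea of contrastive visual grounding; the dimensions, the symmetric-loss choice, and all names below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a contrastive visual-grounding objective (CLIP-style
# InfoNCE). NOT the paper's exact lexicon-level loss; dimensions, names,
# and the symmetric loss are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_grounding_loss(word_emb, image_emb, temperature=0.07):
    """word_emb, image_emb: (batch, dim) tensors for matched word/image pairs."""
    # Normalize so similarities are cosine similarities.
    w = F.normalize(word_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = w @ v.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(w.size(0))         # i-th word matches i-th image
    # Symmetric cross-entropy: words-to-images and images-to-words.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random features standing in for real encoders.
loss = contrastive_grounding_loss(torch.randn(8, 256), torch.randn(8, 256))
```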

Comparing Plausibility Estimates in Base and Instruction-Tuned Large Language Models

no code implementations • 21 Mar 2024 • Carina Kauf, Emmanuele Chersoni, Alessandro Lenci, Evelina Fedorenko, Anna A. Ivanova

Experiment 1 shows that, across model architectures and plausibility datasets, (i) log likelihood ($\textit{LL}$) scores are the most reliable indicator of sentence plausibility, with zero-shot prompting yielding inconsistent and typically poor results; (ii) $\textit{LL}$-based performance is still inferior to human performance; (iii) instruction-tuned models have worse $\textit{LL}$-based performance than base models.

Sentence
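A minimal sketch of the log likelihood ($\textit{LL}$) scoring described in finding (i), using Hugging Face transformers; the choice of GPT-2 and the token-summing convention are assumptions for illustration, not the paper's exact protocol.

```python
# Sketch: sentence log likelihood (LL) under a causal LM, the plausibility
# score described in the excerpt. Model choice and summing convention are
# illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_log_likelihood(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the mean token-level NLL;
        # multiply by the number of predicted tokens to recover the sum.
        nll = model(ids, labels=ids).loss
    return -(nll * (ids.size(1) - 1)).item()

# A plausible sentence should typically score higher than an implausible one.
print(sentence_log_likelihood("The chef cooked the meal."))
print(sentence_log_likelihood("The meal cooked the chef."))
```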

Quantifying the redundancy between prosody and text

1 code implementation • 28 Nov 2023 • Lukas Wolf, Tiago Pimentel, Evelina Fedorenko, Ryan Cotterell, Alex Warstadt, Ethan Wilcox, Tamar Regev

Using a large spoken corpus of English audiobooks, we extract prosodic features aligned to individual words and test how well they can be predicted from LLM embeddings, compared to non-contextual word embeddings.

Word Embeddings
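The paper's redundancy estimates are information-theoretic, but the prediction step in the excerpt can be sketched as a cross-validated regression from word embeddings to a prosodic feature; the ridge setup and placeholder data below are assumptions, not the authors' pipeline.

```python
# Sketch of the prediction step from the excerpt: how well can a prosodic
# feature (e.g., per-word F0 or duration) be predicted from word embeddings?
# Random placeholder data stand in for the audiobook corpus.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, dim = 1000, 300
embeddings = rng.normal(size=(n_words, dim))  # contextual or static vectors
prosody = embeddings[:, 0] + 0.5 * rng.normal(size=n_words)  # fake feature

# Cross-validated R^2: a higher score for contextual (LLM) embeddings than
# for non-contextual ones would indicate prosody-text redundancy.
r2 = cross_val_score(Ridge(alpha=1.0), embeddings, prosody, cv=5, scoring="r2")
print(f"mean R^2 = {r2.mean():.2f}")
```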

Large language models implicitly learn to straighten neural sentence trajectories to construct a predictive representation of natural language

no code implementations • 5 Nov 2023 • Eghbal A. Hosseini, Evelina Fedorenko

We quantify straightness using a 1-dimensional curvature metric, and present four findings in support of the trajectory straightening hypothesis: i) In trained models, the curvature decreases from the early to the deeper layers of the network.

Language Modelling • Sentence
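A common discretization of trajectory curvature is the mean angle between consecutive difference vectors of the hidden states; the sketch below uses that convention, which may differ in detail from the paper's exact metric.

```python
# Sketch of a curvature metric for a sentence trajectory: the mean angle
# between consecutive difference vectors of hidden states. A standard
# discretization; the paper's exact formulation may differ.
import numpy as np

def mean_curvature(states: np.ndarray) -> float:
    """states: (n_tokens, dim) hidden states for one sentence at one layer."""
    diffs = np.diff(states, axis=0)  # segment vectors between adjacent tokens
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)
    # Angle between each pair of consecutive unit segments, in radians.
    cosines = np.clip(np.sum(diffs[:-1] * diffs[1:], axis=1), -1.0, 1.0)
    return float(np.mean(np.arccos(cosines)))

# A straight trajectory has curvature ~0; a random one is close to pi/2.
straight = np.outer(np.arange(10), np.ones(64))
print(mean_curvature(straight), mean_curvature(np.random.randn(10, 64)))
```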

JOSA: Joint surface-based registration and atlas construction of brain geometry and function

no code implementations • 22 Oct 2023 • Jian Li, Greta Tuckute, Evelina Fedorenko, Brian L. Edlow, Adrian V. Dalca, Bruce Fischl

By recognizing the mismatch between geometry and function, JOSA provides new insights into the future development of registration methods using joint analysis of the brain structure and function.

Visual Grounding Helps Learn Word Meanings in Low-Data Regimes

1 code implementation • 20 Oct 2023 • Chengxu Zhuang, Evelina Fedorenko, Jacob Andreas

But to achieve these results, LMs must be trained in distinctly un-human-like ways - requiring orders of magnitude more language data than children receive during development, and without perceptual or social context.

Image Captioning • Language Acquisition • +5

Joint cortical registration of geometry and function using semi-supervised learning

no code implementations • 2 Mar 2023 • Jian Li, Greta Tuckute, Evelina Fedorenko, Brian L. Edlow, Bruce Fischl, Adrian V. Dalca

Brain surface-based image registration, an important component of brain image analysis, establishes spatial correspondence between cortical surfaces.

Image Registration

Dissociating language and thought in large language models

no code implementations • 16 Jan 2023 • Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, Evelina Fedorenko

Large Language Models (LLMs) have come closest among all models to date to mastering human language, yet opinions about their linguistic and cognitive capabilities remain split.

A fine-grained comparison of pragmatic language understanding in humans and language models

1 code implementation • 13 Dec 2022 • Jennifer Hu, Sammy Floyd, Olessia Jouravlev, Evelina Fedorenko, Edward Gibson

We perform a fine-grained comparison of language models and humans on seven pragmatic phenomena, using zero-shot prompting on an expert-curated set of English materials.
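A minimal sketch of zero-shot prompting of the kind described above: pose a yes/no question about a pragmatic reading and compare the model's next-token scores for the two answers. The prompt wording, the GPT-2 backbone, and the two-option scoring are illustrative assumptions, not the paper's materials.

```python
# Sketch of a zero-shot prompting probe: score answer options by the
# model's logit for each option's first token. All details are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def choose(prompt: str, options: list[str]) -> str:
    """Return the option whose first token the model scores highest."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    scores = {o: logits[tok(" " + o).input_ids[0]].item() for o in options}
    return max(scores, key=scores.get)

prompt = ("Ann asked 'Could you pass the salt?' and Bob passed it.\n"
          "Was Ann asking about Bob's ability? Answer Yes or No:")
print(choose(prompt, ["Yes", "No"]))
```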

Event knowledge in large language models: the gap between the impossible and the unlikely

1 code implementation • 2 Dec 2022 • Carina Kauf, Anna A. Ivanova, Giulia Rambelli, Emmanuele Chersoni, Jingyuan Selena She, Zawad Chowdhury, Evelina Fedorenko, Alessandro Lenci

Overall, our results show that important aspects of event knowledge naturally emerge from distributional linguistic patterns, but also highlight a gap between representations of possible/impossible and likely/unlikely events.

Sentence • World Knowledge

Beyond linear regression: mapping models in cognitive neuroscience should align with research goals

no code implementations • 23 Aug 2022 • Anna A. Ivanova, Martin Schrimpf, Stefano Anzellotti, Noga Zaslavsky, Evelina Fedorenko, Leyla Isik

Moreover, we argue that, instead of categorically treating the mapping models as linear or nonlinear, we should instead aim to estimate the complexity of these models.

regression

Interpretability of artificial neural network models in artificial intelligence vs. neuroscience

no code implementations • 7 Jun 2022 • Kohitij Kar, Simon Kornblith, Evelina Fedorenko

Given the widespread calls to improve the interpretability of AI systems, we here highlight these different notions of interpretability and argue that the neuroscientific interpretability of ANNs can be pursued in parallel with, but independently from, the ongoing efforts in AI.

Decision Making

Grammatical cues to subjecthood are redundant in a majority of simple clauses across languages

no code implementations • 30 Jan 2022 • Kyle Mahowald, Evgeniia Diachek, Edward Gibson, Evelina Fedorenko, Richard Futrell

The conclusion is that grammatical cues such as word order are necessary to convey subjecthood and objecthood in only a minority of naturally occurring transitive clauses; nevertheless, they (a) provide an important source of redundancy and (b) are crucial for conveying intended meanings that cannot be inferred from the words alone, including descriptions of human interactions, where roles are often reversible (e.g., Ray helped Lu / Lu helped Ray), and expressions of non-prototypical meanings (e.g., "The bone chewed the dog.").

Sentence • World Knowledge

The neural architecture of language: Integrative modeling converges on predictive processing

1 code implementation • Proceedings of the National Academy of Sciences 2021 • Martin Schrimpf, Idan Blank, Greta Tuckute, Carina Kauf, Eghbal Hosseini, Nancy Kanwisher, Joshua Tenenbaum, Evelina Fedorenko

The neuroscience of perception has recently been revolutionized with an integrative modeling approach in which computation, brain function, and behavior are linked across many datasets and many computational models.

Language Modelling • Probing Language Models • +1

Representations of Computer Programs in the Human Brain

no code implementations • 29 Sep 2021 • Shashank Srikant, Benjamin Lipkin, Anna A. Ivanova, Evelina Fedorenko, Una-May O'Reilly

We find that the Multiple Demand system, a system of brain regions previously shown to respond to code, contains information about multiple specific code properties, as well as machine learned representations of code.
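"Contains information about" typically means the property is decodable from activity patterns; below is a generic cross-validated decoding sketch with random placeholder data standing in for the study's fMRI recordings, not the authors' analysis code.

```python
# Sketch of a decoding analysis: train a classifier to read out a code
# property (e.g., contains a loop vs. not) from voxel patterns. Random
# placeholder data; above-chance cross-validated accuracy on real fMRI
# data would indicate the property is represented.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))    # trials x voxels
y = rng.integers(0, 2, size=200)   # code-property label per trial
acc = cross_val_score(LinearSVC(), X, y, cv=5)
print(f"decoding accuracy: {acc.mean():.2f}")  # ~0.5 on random data
```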

Semantic projection: recovering human knowledge of multiple, distinct object features from word embeddings

no code implementations • 5 Feb 2018 • Gabriel Grand, Idan Asher Blank, Francisco Pereira, Evelina Fedorenko

Because related words appear in similar contexts, such spaces - called "word embeddings" - can be learned from patterns of lexical co-occurrences in natural language.

Word Embeddings
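Semantic projection can be sketched as scoring words by their projection onto a feature axis built from antonym-pair differences; the word lists and the gensim-loaded embedding space below are illustrative assumptions, not the paper's materials.

```python
# Sketch of semantic projection: score words on a feature (here, size) by
# projecting their vectors onto an axis defined by antonym-pair differences.
# Word lists and embedding space are illustrative assumptions.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # any pretrained space works

def feature_axis(pos_words, neg_words):
    """Unit vector from the 'small' end to the 'large' end of the feature."""
    pos = np.mean([vectors[w] for w in pos_words], axis=0)
    neg = np.mean([vectors[w] for w in neg_words], axis=0)
    axis = pos - neg
    return axis / np.linalg.norm(axis)

size = feature_axis(["large", "big", "huge"], ["small", "little", "tiny"])
for animal in ["whale", "elephant", "cat", "ant"]:
    print(animal, float(vectors[animal] @ size))
```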

The Natural Stories Corpus

1 code implementation • LREC 2018 • Richard Futrell, Edward Gibson, Hal Tily, Idan Blank, Anastasia Vishnevetsky, Steven T. Piantadosi, Evelina Fedorenko

It is now a common practice to compare models of human language processing by predicting participant reactions (such as reading times) to corpora consisting of rich naturalistic linguistic materials.
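The model-comparison practice described here is often implemented by regressing per-word reading times on a language model's surprisal; the sketch below uses random placeholder data in place of the corpus's real reading times, and the predictor set is an assumption.

```python
# Sketch of the practice the excerpt describes: regress per-word reading
# times on a language model's surprisal (-log p(word | context)), plus a
# nuisance predictor. Data are random placeholders; the corpus provides
# real reading times aligned to naturalistic stories.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
surprisal = rng.gamma(shape=2.0, scale=2.0, size=n)  # stand-in surprisals
word_len = rng.integers(1, 12, size=n)
rt = 200 + 15 * surprisal + 5 * word_len + rng.normal(0, 40, size=n)

X = sm.add_constant(np.column_stack([surprisal, word_len]))
fit = sm.OLS(rt, X).fit()
# A better language model should yield surprisals explaining more RT variance.
print(fit.params, fit.rsquared)
```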
