Search Results for author: Rochelle Choenni

Found 14 papers, 5 papers with code

Investigating Language Relationships in Multilingual Sentence Encoders Through the Lens of Linguistic Typology

no code implementations • CL (ACL) 2022 • Rochelle Choenni, Ekaterina Shutova

The results provide insight into the encoders' information-sharing mechanisms and suggest that typological properties are encoded jointly across typologically similar languages in these models.

Sentence • XLM-R

Metaphor Understanding Challenge Dataset for LLMs

no code implementations • 18 Mar 2024 • Xiaoyu Tong, Rochelle Choenni, Martha Lewis, Ekaterina Shutova

Metaphor understanding is an essential task for large language models (LLMs).

Examining Modularity in Multilingual LMs via Language-Specialized Subnetworks

no code implementations • 14 Nov 2023 • Rochelle Choenni, Ekaterina Shutova, Dan Garrette

Recent work has proposed explicitly inducing language-wise modularity in multilingual LMs via sparse fine-tuning (SFT) on per-language subnetworks as a means of better guiding cross-lingual sharing.
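
As context for this entry, here is a minimal, hypothetical sketch of the general mechanism behind sparse fine-tuning on a per-language subnetwork: a fixed binary mask gates which parameters a language's gradient updates may touch. The magnitude-based mask criterion and all names (`magnitude_mask`, `sparse_finetune_step`, the batch layout) are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def magnitude_mask(param: torch.Tensor, density: float = 0.1) -> torch.Tensor:
    """Select the top-`density` fraction of weights by absolute magnitude."""
    k = max(1, int(density * param.numel()))
    threshold = param.abs().flatten().topk(k).values.min()
    return (param.abs() >= threshold).float()

def sparse_finetune_step(model, batch, loss_fn, masks, lr=1e-4):
    """One SGD step in which each gradient is gated by its language-specific mask."""
    loss = loss_fn(model(batch["input"]), batch["target"])
    loss.backward()
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.grad is not None:
                param -= lr * param.grad * masks[name]  # update only the subnetwork
                param.grad = None
    return loss.item()
```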

Do large language models solve verbal analogies like children do?

no code implementations • 31 Oct 2023 • Claire E. Stevenson, Mathilde ter Veen, Rochelle Choenni, Han L. J. van der Maas, Ekaterina Shutova

We conclude that, like children, the LLMs we tested tend to solve verbal analogies of the form "A is to B as C is to ?" by association with the C term.

Probing LLMs for Joint Encoding of Linguistic Categories

1 code implementation • 28 Oct 2023 • Giulio Starace, Konstantinos Papakostas, Rochelle Choenni, Apostolos Panagiotopoulos, Matteo Rosati, Alina Leidinger, Ekaterina Shutova

Large Language Models (LLMs) exhibit impressive performance on a range of NLP tasks, due to the general-purpose linguistic knowledge acquired during pretraining.

POS
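
A minimal sketch of the standard diagnostic-probing setup that work like this builds on: a linear classifier is trained to predict a linguistic category (POS, matching the tag above) from frozen hidden states. The array names and train/test split are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(hidden_states: np.ndarray, labels: np.ndarray) -> float:
    """Accuracy of a linear probe mapping (n_tokens, dim) LLM states to POS labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=0)
    probe = LogisticRegression(max_iter=1000)
    probe.fit(X_tr, y_tr)
    return probe.score(X_te, y_te)
```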

How do languages influence each other? Studying cross-lingual data sharing during LLM fine-tuning

no code implementations • 22 May 2023 • Rochelle Choenni, Dan Garrette, Ekaterina Shutova

We further study how different fine-tuning languages influence model performance on a given test language and find that they can both reinforce and complement the knowledge acquired from the test language's own data.

Zero-Shot Cross-Lingual Transfer

Data-Efficient Cross-Lingual Transfer with Language-Specific Subnetworks

no code implementations • 31 Oct 2022 • Rochelle Choenni, Dan Garrette, Ekaterina Shutova

Large multilingual language models typically share their parameters across all languages, which enables cross-lingual task transfer, but learning can also be hindered when training updates from different languages are in conflict.

Cross-Lingual Transfer • Meta-Learning
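
Since the abstract above frames the problem in terms of conflicting training updates, here is a minimal sketch of how such conflict is commonly quantified: cosine similarity between the flattened gradients computed on two languages' batches, with negative values indicating that the updates pull shared parameters in opposite directions. Function and batch names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def flat_grad(model, batch, loss_fn) -> torch.Tensor:
    """Backpropagate the loss on one language's batch and flatten all gradients."""
    model.zero_grad()
    loss = loss_fn(model(batch["input"]), batch["target"])
    loss.backward()
    return torch.cat([p.grad.flatten() for p in model.parameters()
                      if p.grad is not None])

def gradient_conflict(model, batch_l1, batch_l2, loss_fn) -> float:
    """Cosine similarity of two languages' gradients; values < 0 signal conflict."""
    g1 = flat_grad(model, batch_l1, loss_fn)
    g2 = flat_grad(model, batch_l2, loss_fn)
    return F.cosine_similarity(g1, g2, dim=0).item()
```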

What does it mean to be language-agnostic? Probing multilingual sentence encoders for typological properties

no code implementations • 27 Sep 2020 • Rochelle Choenni, Ekaterina Shutova

Multilingual sentence encoders have seen much success in cross-lingual model transfer for downstream NLP tasks.

Sentence • XLM-R

Blackbox Meets Blackbox: Representational Similarity & Stability Analysis of Neural Language Models and Brains

1 code implementation • WS 2019 • Samira Abnar, Lisa Beinborn, Rochelle Choenni, Willem Zuidema

In this paper, we define and apply representational stability analysis (ReStA), an intuitive way of analyzing neural language models.
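
A minimal sketch of the representational similarity machinery that ReStA builds on: each set of representations is reduced to its pairwise-similarity structure over the same stimuli, and the two structures are then correlated. ReStA applies this between instances of the same model (e.g., under varied input conditions); the metric choices below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def similarity_structure(reps: np.ndarray) -> np.ndarray:
    """Condensed vector of pairwise cosine similarities over n stimuli."""
    return 1.0 - pdist(reps, metric="cosine")

def representational_similarity(reps_a: np.ndarray, reps_b: np.ndarray) -> float:
    """Spearman correlation between two (n_stimuli, dim) representation sets."""
    rho, _ = spearmanr(similarity_structure(reps_a),
                       similarity_structure(reps_b))
    return rho
```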

Semantic Drift in Multilingual Representations

1 code implementation • CL (ACL) 2020 • Lisa Beinborn, Rochelle Choenni

We propose an adapted version of representational similarity analysis, applied to a selected set of concepts in computational multilingual representations.

Sentence

Robust Evaluation of Language-Brain Encoding Experiments

1 code implementation • 4 Apr 2019 • Lisa Beinborn, Samira Abnar, Rochelle Choenni

Language-brain encoding experiments evaluate the ability of language models to predict brain responses elicited by language stimuli.
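
A minimal sketch of the encoding setup this entry evaluates: ridge regression maps stimulus representations to brain responses, and held-out predictions are scored per voxel. The cross-validation scheme, scoring by Pearson correlation, and all array names are illustrative assumptions rather than the paper's exact protocol.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def encoding_scores(reps: np.ndarray, brain: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Mean per-voxel Pearson r; reps: (n_stimuli, dim), brain: (n_stimuli, n_voxels)."""
    fold_scores = []
    for train_idx, test_idx in KFold(n_splits=5).split(reps):
        model = Ridge(alpha=alpha).fit(reps[train_idx], brain[train_idx])
        pred, true = model.predict(reps[test_idx]), brain[test_idx]
        # Pearson correlation per voxel between predicted and observed responses
        pred_c, true_c = pred - pred.mean(0), true - true.mean(0)
        r = (pred_c * true_c).sum(0) / (
            np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0) + 1e-9)
        fold_scores.append(r)
    return np.mean(fold_scores, axis=0)
```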
