1 code implementation • EMNLP (BlackboxNLP) 2020 • Hila Gonen, Shauli Ravfogel, Yanai Elazar, Yoav Goldberg
Recent works have demonstrated that multilingual BERT (mBERT) learns rich cross-lingual representations that allow for transfer across languages.
2 code implementations • 11 Apr 2024 • Anton Schäfer, Shauli Ravfogel, Thomas Hofmann, Tiago Pimentel, Imanol Schlag
In controlled experiments on perfectly equivalent cloned languages, we observe that the existence of a predominant language during training boosts the performance of less frequent languages and leads to stronger alignment of model representations across languages.
no code implementations • 17 Feb 2024 • Matan Avitan, Ryan Cotterell, Yoav Goldberg, Shauli Ravfogel
Interventions targeting the representation space of language models (LMs) have emerged as effective means to influence model behavior.
no code implementations • 15 Feb 2024 • Shashwat Singh, Shauli Ravfogel, Jonathan Herzig, Roee Aharoni, Ryan Cotterell, Ponnurangam Kumaraguru
We demonstrate the effectiveness of the proposed approaches in mitigating bias in multiclass classification and in reducing the generation of toxic language, outperforming strong baselines.
1 code implementation • 24 Oct 2023 • Mosh Levy, Shauli Ravfogel, Yoav Goldberg
Using GPT4 as the editor, we find it can successfully edit trigger shortcuts into samples that fool LLMs.
1 code implementation • 18 Oct 2023 • Aviv Slobodkin, Omer Goldman, Avi Caciularu, Ido Dagan, Shauli Ravfogel
In this paper, we explore the behavior of LLMs when presented with (un)answerable queries.
1 code implementation • NeurIPS 2023 • Royi Rassin, Eran Hirsch, Daniel Glickman, Shauli Ravfogel, Yoav Goldberg, Gal Chechik
This reflects an impaired mapping between linguistic binding of entities and modifiers in the prompt and visual binding of the corresponding elements in the generated image.
1 code implementation • NeurIPS 2023 • Nora Belrose, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, Stella Biderman
Concept erasure aims to remove specified features from a representation.
1 code implementation • 26 May 2023 • Marius Mosbach, Tiago Pimentel, Shauli Ravfogel, Dietrich Klakow, Yanai Elazar
In this paper, we compare the generalization of few-shot fine-tuning and in-context learning to challenge datasets, while controlling for the models used, the number of examples, and the number of parameters, ranging from 125M to 30B.
1 code implementation • 23 May 2023 • Yuxin Ren, Qipeng Guo, Zhijing Jin, Shauli Ravfogel, Mrinmaya Sachan, Bernhard Schölkopf, Ryan Cotterell
Transformer models have driven substantial advances in various NLP tasks, prompting a large body of interpretability research on the models' learned representations.
no code implementations • 21 May 2023 • Shauli Ravfogel, Valentina Pyatkin, Amir DN Cohen, Avshalom Manevich, Yoav Goldberg
Identifying texts with a given semantics is central to many information-seeking scenarios.
no code implementations • 4 May 2023 • Shauli Ravfogel, Yoav Goldberg, Jacob Goldberger
Language models generate text by successively sampling the next word.
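This next-word sampling step can be sketched as temperature-scaled softmax sampling over a logit vector (a generic illustration of the sampling loop, not the method proposed in the paper; the function name and toy logits are hypothetical):

```python
import numpy as np

def sample_next_word(logits, temperature=1.0, rng=None):
    """Sample one token index from a logit vector via temperature-scaled softmax."""
    rng = rng or np.random.default_rng(0)
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()                          # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()   # softmax over the vocabulary
    return int(rng.choice(len(probs), p=probs))

# With a sharply peaked logit vector, the argmax token is (almost surely) chosen:
idx = sample_next_word([100.0, 0.0, 0.0])
```

Lower temperatures concentrate probability mass on the highest-logit tokens; generation repeats this step, feeding each sampled word back into the model.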
no code implementations • 19 Oct 2022 • Royi Rassin, Shauli Ravfogel, Yoav Goldberg
We study the way DALLE-2 maps symbols (words) in the prompt to their references (entities or properties of entities in the generated image).
no code implementations • 18 Oct 2022 • Shauli Ravfogel, Yoav Goldberg, Ryan Cotterell
Linearity-assuming methods for erasing human-interpretable concepts from neural representations have been found to be tractable and useful.
no code implementations • 17 Aug 2022 • Rita Sevastjanova, Eren Cakmak, Shauli Ravfogel, Ryan Cotterell, Mennatallah El-Assady
The simplicity of adapter training and composition comes with new challenges, such as maintaining an overview of adapter properties and effectively comparing their produced embedding spaces.
no code implementations • 28 Jul 2022 • Yanai Elazar, Nora Kassner, Shauli Ravfogel, Amir Feder, Abhilasha Ravichander, Marius Mosbach, Yonatan Belinkov, Hinrich Schütze, Yoav Goldberg
Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models.
1 code implementation • RepL4NLP (ACL) 2022 • Hila Gonen, Shauli Ravfogel, Yoav Goldberg
Multilingual language models have been shown to enable nontrivial transfer across scripts and languages.
2 code implementations • 28 Jan 2022 • Shauli Ravfogel, Michael Twiton, Yoav Goldberg, Ryan Cotterell
Modern neural models trained on textual data rely on pre-trained representations that emerge without direct supervision.
1 code implementation • 28 Jan 2022 • Shauli Ravfogel, Francisco Vargas, Yoav Goldberg, Ryan Cotterell
One prominent approach for the identification of concepts in neural representations is searching for a linear subspace whose erasure prevents the prediction of the concept from the representations.
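In the simplest single-direction case, such a linear erasure amounts to projecting each representation onto the orthogonal complement (nullspace) of a learned concept direction — a sketch in that spirit, not the paper's exact method:

```python
import numpy as np

def erase_direction(X, w):
    """Project rows of X onto the nullspace of concept direction w,
    so no linear predictor along w can recover the concept."""
    w = np.asarray(w, dtype=float)
    w = w / np.linalg.norm(w)
    P = np.eye(len(w)) - np.outer(w, w)   # rank-(d-1) orthogonal projection
    return X @ P

X = np.random.default_rng(0).normal(size=(5, 3))   # toy representations
w = np.array([1.0, 2.0, 2.0])                      # hypothetical concept direction
Xp = erase_direction(X, w)                         # Xp @ w is numerically zero
```

A full method would typically learn `w` with a linear classifier for the concept and may iterate over multiple directions; this shows only the core projection step.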
5 code implementations • ACL 2022 • Elad Ben Zaken, Shauli Ravfogel, Yoav Goldberg
We introduce BitFit, a sparse-finetuning method in which only the bias terms of the model (or a subset of them) are modified.
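The bias-only idea can be illustrated with a toy linear layer where a gradient step touches only the bias while the weight matrix stays frozen (a schematic sketch under squared-error loss, not the paper's implementation):

```python
import numpy as np

def bitfit_step(W, b, x, y, lr=0.1):
    """One gradient step on a linear layer's squared error,
    updating only the bias (BitFit-style); W remains frozen."""
    pred = W @ x + b
    grad_b = 2.0 * (pred - y)        # d/db of ||W x + b - y||^2
    return W, b - lr * grad_b        # W is returned unchanged

W = np.eye(2)                        # frozen pretrained weights (toy)
b = np.zeros(2)                      # trainable bias
x = np.array([1.0, -1.0])
y = np.array([0.5, 0.0])
W2, b2 = bitfit_step(W, b, x, y)     # only b2 differs from b
```

In a real model the same effect is achieved by marking every non-bias parameter as frozen before training, which shrinks the set of updated parameters to a tiny fraction of the total.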
no code implementations • ACL 2021 • Shauli Ravfogel, Hillel Taub-Tabib, Yoav Goldberg
We advocate for a search paradigm called "extractive search", in which a search query is enriched with capture-slots, to allow for such rapid extraction.
no code implementations • CoNLL (EMNLP) 2021 • Shauli Ravfogel, Grusha Prasad, Tal Linzen, Yoav Goldberg
We apply this method to study how BERT models of different sizes process relative clauses (RCs).
1 code implementation • EMNLP 2021 • Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg
Our method is based on projecting model representation to a latent space that captures only the features that are useful (to the model) to differentiate two potential decisions.
1 code implementation • 1 Feb 2021 • Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg
In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge?
no code implementations • EMNLP (insights) 2020 • Yanai Elazar, Victoria Basmov, Shauli Ravfogel, Yoav Goldberg, Reut Tsarfaty
In this work, we follow known methodologies of collecting labeled data for the complement coercion phenomenon.
1 code implementation • EMNLP (BlackboxNLP) 2020 • Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg
Contextualized word representations, such as ELMo and BERT, were shown to perform well on various semantic and syntactic tasks.
no code implementations • 1 Jun 2020 • Yanai Elazar, Shauli Ravfogel, Alon Jacovi, Yoav Goldberg
In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
2 code implementations • ACL 2020 • Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, Yoav Goldberg
The ability to control for the kinds of information encoded in neural representations has a variety of use cases, especially in light of the challenge of interpreting these models.
2 code implementations • NAACL 2021 • Carlo Meloni, Shauli Ravfogel, Yoav Goldberg
Historical linguists have identified regularities in the process of historical sound change.
2 code implementations • NAACL 2019 • Shauli Ravfogel, Yoav Goldberg, Tal Linzen
How do typological properties such as word order and morphological case marking affect the ability of neural sequence models to acquire the syntax of a language?
no code implementations • WS 2018 • Shauli Ravfogel, Francis M. Tyers, Yoav Goldberg
We propose the Basque agreement prediction task as a challenging benchmark for models that attempt to learn regularities in human language.