no code implementations • 1 Feb 2024 • Alon Jacovi, Yonatan Bitton, Bernd Bohnet, Jonathan Herzig, Or Honovich, Michael Tseng, Michael Collins, Roee Aharoni, Mor Geva
REVEAL includes comprehensive labels for the relevance, attribution to evidence passages, and logical correctness of each reasoning step in a language model's answer, across a variety of datasets and state-of-the-art language models.
no code implementations • 16 Oct 2023 • Alon Jacovi, Avi Caciularu, Jonathan Herzig, Roee Aharoni, Bernd Bohnet, Mor Geva
A growing area of research investigates augmenting language models with tools (e.g., search engines, calculators) to overcome their shortcomings (e.g., missing or incorrect knowledge, incorrect logical inferences).
no code implementations • 5 Oct 2023 • Tita A. Bach, Jenny K. Kristiansen, Aleksandar Babic, Alon Jacovi
We divided our investigation into the following research areas: (1) terms used to describe HAII, (2) primary roles of AI-enabled systems, (3) factors that influence HAII, and (4) how HAII is measured.
1 code implementation • 17 May 2023 • Alon Jacovi, Avi Caciularu, Omer Goldman, Yoav Goldberg
Data contamination has become prevalent and challenging with the rise of models pretrained on large automatically-crawled corpora.
1 code implementation • 4 May 2023 • Alon Jacovi, Hendrik Schuff, Heike Adel, Ngoc Thang Vu, Yoav Goldberg
Word-level saliency explanations ("heat maps over words") are often used to communicate feature-attribution in text-based models.
1 code implementation • 13 Jan 2023 • Alon Jacovi
The XAI literature is decentralized, both in terminology and in publication venues, but recent years have seen the community converge around keywords that make it possible to discover papers automatically and more reliably.
1 code implementation • 27 Jan 2022 • Hendrik Schuff, Alon Jacovi, Heike Adel, Yoav Goldberg, Ngoc Thang Vu
In this work, we focus on this question through a study of saliency-based explanations over textual data.
no code implementations • 27 Jan 2022 • Alon Jacovi, Jasmijn Bastings, Sebastian Gehrmann, Yoav Goldberg, Katja Filippova
We posit that folk concepts of behavior provide a "language" with which humans understand behavior.
1 code implementation • EMNLP 2021 • Alon Jacovi, Swabha Swayamdipta, Shauli Ravfogel, Yanai Elazar, Yejin Choi, Yoav Goldberg
Our method is based on projecting model representation to a latent space that captures only the features that are useful (to the model) to differentiate two potential decisions.
no code implementations • 15 Oct 2020 • Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
1 code implementation • EMNLP 2020 • Shachar Rosenman, Alon Jacovi, Yoav Goldberg
The process of collecting and annotating training data can introduce distribution artifacts that limit the ability of models to learn correct generalization behavior.
no code implementations • 1 Jun 2020 • Yanai Elazar, Shauli Ravfogel, Alon Jacovi, Yoav Goldberg
In this work, we point out the inability to infer behavioral conclusions from probing results and offer an alternative method that focuses on how the information is being used, rather than on what information is encoded.
1 code implementation • 1 Jun 2020 • Alon Jacovi, Yoav Goldberg
We find that the requirement of model interpretations to be faithful is vague and incomplete.
no code implementations • ACL 2020 • Alon Jacovi, Yoav Goldberg
With the growing popularity of deep-learning-based NLP models comes a need for interpretable systems.
1 code implementation • EACL 2021 • Alon Jacovi, Gang Niu, Yoav Goldberg, Masashi Sugiyama
We consider the situation in which a user has collected a small set of documents on a cohesive topic, and they want to retrieve additional documents on this topic from a large collection.
no code implementations • WS 2019 • Sima Sharifirad, Alon Jacovi
Sexism is very common in social media and makes the boundaries of free speech tighter for female users.
no code implementations • ICLR 2019 • Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, Jonathan Berant
We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.
2 code implementations • WS 2018 • Alon Jacovi, Oren Sar Shalom, Yoav Goldberg
We present an analysis into the inner workings of Convolutional Neural Networks (CNNs) for processing text.
no code implementations • 24 Apr 2018 • Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, Alon Jacovi
At inference time, we replace each estimator with its existing application counterpart and let the base network solve the task by interacting with the existing application.