Search Results for author: Yoav Shoham

Found 13 papers, 4 papers with code

Generating Benchmarks for Factuality Evaluation of Language Models

2 code implementations • 13 Jul 2023 • Dor Muhlgay, Ori Ram, Inbal Magar, Yoav Levine, Nir Ratner, Yonatan Belinkov, Omri Abend, Kevin Leyton-Brown, Amnon Shashua, Yoav Shoham

FACTOR automatically transforms a factual corpus of interest into a benchmark evaluating an LM's propensity to generate true facts from the corpus vs. similar but incorrect statements.

Language Modelling • Retrieval
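A minimal sketch of the likelihood comparison at the heart of such a factuality benchmark: the LM should assign higher probability to the true statement than to a minimally perturbed false one. This assumes an off-the-shelf Hugging Face causal LM; the example statements, the choice of GPT-2, and the pass criterion are illustrative assumptions, not the paper's actual corpus, models, or generation procedure.

```python
# Hedged sketch of scoring a true statement against a false variant with an
# off-the-shelf causal LM. GPT-2 and the example texts are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sequence_log_likelihood(text: str) -> float:
    """Sum of token log-probabilities the LM assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.shape[1] - 1)  # un-average over predicted tokens

true_fact = "The Eiffel Tower is located in Paris."
false_fact = "The Eiffel Tower is located in Rome."  # minimally perturbed

# The LM "passes" this benchmark item if it prefers the true statement.
print(sequence_log_likelihood(true_fact) > sequence_log_likelihood(false_fact))
```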

Human or Not? A Gamified Approach to the Turing Test

no code implementations • 31 May 2023 • Daniel Jannai, Amos Meron, Barak Lenz, Yoav Levine, Yoav Shoham

Over the course of a month, the game was played by over 1.5 million users who engaged in anonymous two-minute chat sessions with either another human or an AI language model which was prompted to behave like humans.

Language Modelling

In-Context Retrieval-Augmented Language Models

1 code implementation • 31 Jan 2023 • Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham

Retrieval-Augmented Language Modeling (RALM) methods, which condition a language model (LM) on relevant documents from a grounding corpus during generation, were shown to significantly improve language modeling performance.

Language Modelling • Retrieval +1
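The core move of in-context RALM can be sketched very simply: retrieve a relevant document and prepend it to the LM's input, leaving the LM itself unchanged. The toy corpus and the word-overlap retriever below are stand-ins (assumptions); the paper evaluates stronger off-the-shelf retrievers such as BM25.

```python
# Hedged sketch of in-context retrieval augmentation by plain concatenation.
corpus = [
    "Marie Curie won Nobel Prizes in both physics and chemistry.",
    "The Great Barrier Reef lies off the coast of Queensland, Australia.",
]

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document sharing the most words with the query (toy retriever)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def ralm_prompt(prefix: str) -> str:
    """Condition generation on the retrieved document by prepending it."""
    doc = retrieve(prefix, corpus)
    return f"{doc}\n{prefix}"

prompt = ralm_prompt("Marie Curie won the Nobel Prize in")
print(prompt)  # the grounded prefix is then fed to any off-the-shelf LM
```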

Parallel Context Windows for Large Language Models

1 code implementation • 21 Dec 2022 • Nir Ratner, Yoav Levine, Yonatan Belinkov, Ori Ram, Inbal Magar, Omri Abend, Ehud Karpas, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham

We present Parallel Context Windows (PCW), a method that alleviates the context window restriction for any off-the-shelf LLM without further training.

In-Context Learning • Playing the Game of 2048 +2
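A hedged sketch of the parallel-windows idea: chunks of a long context reuse the same position ids and attend only within themselves, while the final task tokens attend across all chunks. This builds only the positions and attention mask, not a full model integration, and the window sizes, token counts, and exact position offsets are illustrative assumptions rather than the paper's precise recipe.

```python
# Hedged sketch: position ids and attention mask for parallel context windows.
import torch

def pcw_positions_and_mask(window_lens: list[int], task_len: int):
    total = sum(window_lens) + task_len
    pos = torch.empty(total, dtype=torch.long)
    mask = torch.zeros(total, total, dtype=torch.bool)  # True = may attend
    offset = 0
    for n in window_lens:
        pos[offset:offset + n] = torch.arange(n)  # every window restarts at 0
        # causal attention restricted to the window itself
        mask[offset:offset + n, offset:offset + n] = torch.tril(
            torch.ones(n, n, dtype=torch.bool))
        offset += n
    # task tokens get positions after the (shared) window positions...
    max_w = max(window_lens)
    pos[offset:] = torch.arange(max_w, max_w + task_len)
    # ...and causally attend to all windows plus earlier task tokens
    mask[offset:, :offset] = True
    mask[offset:, offset:] = torch.tril(
        torch.ones(task_len, task_len, dtype=torch.bool))
    return pos, mask

pos, mask = pcw_positions_and_mask(window_lens=[4, 4], task_len=2)
print(pos)   # tensor([0, 1, 2, 3, 0, 1, 2, 3, 4, 5])
print(mask.int())
```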

Standing on the Shoulders of Giant Frozen Language Models

no code implementations • 21 Apr 2022 • Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown, Yoav Shoham

To demonstrate this, we introduce three novel methods for leveraging frozen models: input-dependent prompt tuning, frozen readers, and recursive LMs, each of which vastly improves on current frozen-model approaches.
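Of the three methods, the recursion pattern is the simplest to sketch: a single frozen LM is called twice, with the first pass's output re-framed as input for the second. `frozen_lm` below is a hypothetical stand-in for any off-the-shelf generate() call, and the re-prompting template is an illustrative assumption, not the paper's exact formulation.

```python
# Hedged sketch of the LM-recursion pattern over a frozen model.

def frozen_lm(prompt: str) -> str:
    """Placeholder for a frozen model's generate(); no weights are updated."""
    return f"<continuation of: {prompt!r}>"

def recursive_lm(question: str) -> str:
    draft = frozen_lm(question)        # first pass: draft an answer
    refine_prompt = (                  # second pass reuses the same frozen LM
        f"Question: {question}\nDraft answer: {draft}\nImproved answer:"
    )
    return frozen_lm(refine_prompt)

print(recursive_lm("Why is the sky blue?"))
```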

PMI-Masking: Principled masking of correlated spans

1 code implementation • ICLR 2021 • Yoav Levine, Barak Lenz, Opher Lieber, Omri Abend, Kevin Leyton-Brown, Moshe Tennenholtz, Yoav Shoham

Specifically, we show experimentally that PMI-Masking reaches the performance of prior masking approaches in half the training time, and consistently improves performance at the end of training.
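The PMI idea behind the method can be sketched on a toy corpus: score word bigrams by pointwise mutual information, log p(w1, w2) / (p(w1) p(w2)), and treat high-PMI bigrams as single spans to mask jointly rather than as independent tokens. The paper's actual measure extends this to longer n-grams over a large pretraining corpus; the toy text, the bigram restriction, and the minimum-count cutoff below are assumptions for illustration.

```python
# Hedged sketch: rank bigrams by PMI to find spans worth masking jointly.
import math
from collections import Counter

corpus = ("new york is big . i love new york . "
          "new york has a port . i love big ports .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n_uni, n_bi = sum(unigrams.values()), sum(bigrams.values())

def pmi(w1: str, w2: str) -> float:
    """How much more often the pair co-occurs than independence predicts."""
    return math.log((bigrams[w1, w2] / n_bi) /
                    ((unigrams[w1] / n_uni) * (unigrams[w2] / n_uni)))

# A minimum-count cutoff tames PMI's bias toward one-off rare pairs.
candidates = [b for b in bigrams if bigrams[b] >= 2]
for b in sorted(candidates, key=lambda b: pmi(*b), reverse=True):
    print(b, round(pmi(*b), 2))  # collocations like ('new', 'york') rank high
```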

The Cost of Training NLP Models: A Concise Overview

no code implementations • 19 Apr 2020 • Or Sharir, Barak Peleg, Yoav Shoham

We review the cost of training large-scale language models, and the drivers of these costs.
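A hedged back-of-envelope sketch of how such training costs are typically estimated: total compute from model and data size, divided by achievable hardware throughput, times a price per accelerator-hour. The 6 x N x D FLOPs rule of thumb and every number below are illustrative assumptions, not figures from this paper.

```python
# Hedged cost estimate; all quantities are assumed, not taken from the paper.
params = 1.5e9               # model parameters (GPT-2-scale, assumed)
tokens = 40e9                # training tokens (assumed)
flops = 6 * params * tokens  # ~6 FLOPs per parameter per token (rule of thumb)

peak_flops = 312e12          # one accelerator's peak FLOP/s (assumed)
utilization = 0.3            # realistic fraction of peak achieved (assumed)
price_per_hour = 2.0         # USD per accelerator-hour (assumed)

hours = flops / (peak_flops * utilization) / 3600
print(f"~{hours:,.0f} accelerator-hours, ~${hours * price_per_hour:,.0f}")
```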
