Search Results for author: Megi Dervishi

Found 1 paper, 1 paper with code

WorldSense: A Synthetic Benchmark for Grounded Reasoning in Large Language Models

1 code implementation • 27 Nov 2023 • Youssef Benchekroun, Megi Dervishi, Mark Ibrahim, Jean-Baptiste Gaya, Xavier Martinet, Grégoire Mialon, Thomas Scialom, Emmanuel Dupoux, Dieuwke Hupkes, Pascal Vincent

We propose WorldSense, a benchmark designed to assess the extent to which LLMs are consistently able to sustain tacit world models, by testing how they draw simple inferences from descriptions of simple arrangements of entities.
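To illustrate the kind of test the abstract describes — drawing a simple inference from a description of a simple arrangement of entities — here is a minimal hypothetical sketch. The function name, entities, and templates are assumptions for illustration only; the actual WorldSense problems are generated by the paper's released code.

```python
def make_problem(entities):
    """Build a textual description of a left-to-right arrangement and a
    query whose answer requires a transitive inference (hypothetical
    generator, not the paper's actual code)."""
    # State only the adjacent relations, e.g. "the cat is left of the dog".
    facts = [f"the {a} is left of the {b}"
             for a, b in zip(entities, entities[1:])]
    description = "; ".join(facts) + "."
    # Ask about the endpoints: the answer never appears verbatim in the
    # facts, so a model must chain the relations to answer correctly.
    query = f"Is the {entities[0]} left of the {entities[-1]}?"
    return description, query, True

description, query, answer = make_problem(["cat", "dog", "bird"])
```

A problem like this has a single unambiguous answer, which is what lets a benchmark of this style check whether a model sustains a consistent tacit world model rather than pattern-matching the surface text.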

In-Context Learning
