no code implementations • 1 Mar 2024 • Ariel Goldstein, Gabriel Stanovsky
Recent advances in LLMs have sparked a debate on whether they understand text.
no code implementations • 6 Feb 2024 • Amir Taubenfeld, Yaniv Dover, Roi Reichart, Ariel Goldstein
Recent advances in natural language processing, especially the emergence of Large Language Models (LLMs), have opened exciting possibilities for constructing computational simulations designed to accurately replicate human behavior.
no code implementations • 25 Oct 2023 • Alon Goldstein, Miriam Havin, Roi Reichart, Ariel Goldstein
This paper investigates the problem-solving capabilities of Large Language Models (LLMs) by evaluating their performance on stumpers: single-step intuition problems that are challenging for human solvers but whose solutions are easy to verify.
no code implementations • 11 Oct 2023 • Ariel Goldstein, Eric Ham, Mariano Schain, Samuel Nastase, Zaid Zada, Avigail Dabush, Bobbi Aubrey, Harshvardhan Gazula, Amir Feder, Werner K Doyle, Sasha Devore, Patricia Dugan, Daniel Friedman, Roi Reichart, Michael Brenner, Avinatan Hassidim, Orrin Devinsky, Adeen Flinker, Omer Levy, Uri Hasson
Our results reveal a connection between human language processing and DLMs, with the DLM's layer-by-layer accumulation of contextual information mirroring the timing of neural activity in high-order language areas.