Search Results for author: Eric Schulz

Found 20 papers, 9 papers with code

In-context learning agents are asymmetric belief updaters

no code implementations • 6 Feb 2024 • Johannes A. Schubert, Akshay K. Jagadish, Marcel Binz, Eric Schulz

We study the in-context learning dynamics of large language models (LLMs) using three instrumental learning tasks adapted from cognitive psychology.

counterfactual • In-Context Learning • +1
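
The titular asymmetry can be illustrated with a textbook value update that applies different learning rates to positive and negative prediction errors. The sketch below is a generic illustration in Python, not the paper's model or analysis; the learning-rate values are arbitrary.

    import numpy as np

    def asymmetric_update(value, reward, alpha_pos=0.4, alpha_neg=0.1):
        """Update a value estimate with separate learning rates for positive
        and negative prediction errors (both rates are illustrative)."""
        delta = reward - value                      # prediction error
        alpha = alpha_pos if delta > 0 else alpha_neg
        return value + alpha * delta

    # Toy run on a fair Bernoulli reward stream: with alpha_pos > alpha_neg
    # the estimate settles above the true mean of 0.5, i.e. an optimism bias.
    rng = np.random.default_rng(0)
    value = 0.5
    for _ in range(1000):
        value = asymmetric_update(value, rng.binomial(1, 0.5))
    print(f"final estimate: {value:.2f}")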

Ecologically rational meta-learned inference explains human category learning

no code implementations • 2 Feb 2024 • Akshay K. Jagadish, Julian Coda-Forno, Mirko Thalmann, Eric Schulz, Marcel Binz

We tackle the second challenge by deriving rational agents adapted to these tasks using the framework of meta-learning, leading to a class of models called ecologically rational meta-learned inference (ERMI).

Meta-Learning • valid

Predicting the Future with Simple World Models

no code implementations • 31 Jan 2024 • Tankred Saanum, Peter Dayan, Eric Schulz

Abstracting the dynamics of the environment with simple models can have several benefits.

Video Prediction

Visual cognition in multimodal large language models

1 code implementation • 27 Nov 2023 • Luca M. Schulze Buschoff, Elif Akata, Matthias Bethge, Eric Schulz

A chief goal of artificial intelligence is to build machines that think like people.

The Acquisition of Physical Knowledge in Generative Neural Networks

1 code implementation • 30 Oct 2023 • Luca M. Schulze Buschoff, Eric Schulz, Marcel Binz

As children grow older, they develop an intuitive understanding of the physical processes around them.

Stochastic Optimization

Turning large language models into cognitive models

1 code implementation • 6 Jun 2023 • Marcel Binz, Eric Schulz

We find that, after finetuning them on data from psychological experiments, these models offer accurate representations of human behavior, even outperforming traditional cognitive models in two decision-making domains.

Decision Making • Mathematical Reasoning
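
A minimal sketch of the general recipe described here: serialize trial-by-trial experimental data into text and finetune a small causal language model on it with the Hugging Face Trainer. The trial format, model choice (gpt2), and hyperparameters are placeholders, not the authors' pipeline.

    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    # Choice trials serialized as text (a made-up format, for illustration only).
    trials = [
        {"text": "Machine F paid 4 dollars. Machine J paid 7 dollars. You chose Machine J."},
        {"text": "Machine F paid 9 dollars. Machine J paid 2 dollars. You chose Machine F."},
    ]
    dataset = Dataset.from_list(trials)

    model_name = "gpt2"  # placeholder; any causal LM could stand in here
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    tokenized = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=64),
        batched=True, remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="choice-lm", num_train_epochs=1,
                               per_device_train_batch_size=2, report_to=[]),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()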

Playing repeated games with Large Language Models

no code implementations • 26 May 2023 • Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz

In a large set of two-player, two-strategy games, we find that LLMs are particularly good at games where valuing their own self-interest pays off, like the iterated Prisoner's Dilemma family.
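
A sketch of the repeated-game setup: a standard Prisoner's Dilemma payoff matrix and a loop in which two strategies play against each other. In the paper the moves come from prompted LLMs; here they are stubbed with simple hand-coded strategies, and the payoff values are the textbook ones rather than necessarily the paper's.

    # Prisoner's Dilemma payoffs as (row player, column player) points.
    PAYOFFS = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def tit_for_tat(my_history, their_history):
        """Cooperate first, then mirror the opponent's last move."""
        return "C" if not their_history else their_history[-1]

    def always_defect(my_history, their_history):
        return "D"

    def play_repeated(strategy_a, strategy_b, rounds=10):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a = strategy_a(hist_a, hist_b)   # in the paper, an LLM's reply
            b = strategy_b(hist_b, hist_a)
            pa, pb = PAYOFFS[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play_repeated(tit_for_tat, always_defect))  # (9, 14) over 10 rounds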

In-Context Impersonation Reveals Large Language Models' Strengths and Biases

1 code implementation • NeurIPS 2023 • Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto, Eric Schulz, Zeynep Akata

These findings demonstrate that LLMs are capable of taking on diverse roles and that this in-context impersonation can be used to uncover their hidden strengths and biases.
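
The mechanism itself is simple: a persona instruction is prepended to the task prompt before querying the model. The template below illustrates the idea; it is not the exact wording used in the paper.

    def impersonation_prompt(persona: str, task: str) -> str:
        """Prefix a task prompt with a persona instruction (illustrative template)."""
        return f"If you were a {persona}, how would you respond?\n{task}"

    task = "Choose option A or option B to maximize your reward."
    for persona in ["4-year-old child", "domain expert", "risk-averse investor"]:
        print(impersonation_prompt(persona, task))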

Meta-Learned Models of Cognition

1 code implementation • 12 Apr 2023 • Marcel Binz, Ishita Dasgupta, Akshay Jagadish, Matthew Botvinick, Jane X. Wang, Eric Schulz

Meta-learning is a framework for learning learning algorithms through repeated interactions with an environment as opposed to designing them by hand.

Bayesian Inference • Meta-Learning
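
A minimal PyTorch sketch of memory-based meta-learning in this spirit: a recurrent network is trained across many randomly sampled Bernoulli tasks to estimate the latent probability from the observation history, so that after training its forward pass approximates Bayesian inference. The architecture, task distribution, and hyperparameters are all illustrative.

    import torch
    import torch.nn as nn

    class MetaLearner(nn.Module):
        """GRU that maps a sequence of binary observations to an estimate of
        the latent probability that generated them (sizes are arbitrary)."""
        def __init__(self, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, obs):                       # obs: (batch, T, 1)
            h, _ = self.rnn(obs)
            return torch.sigmoid(self.head(h))        # estimate after every trial

    model = MetaLearner()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(1000):                          # outer loop over tasks
        theta = torch.rand(64, 1, 1)                  # latent parameter per task
        obs = torch.bernoulli(theta.expand(64, 20, 1))  # 20 trials per task
        pred = model(obs)
        loss = ((pred - theta) ** 2).mean()           # regress onto the latent value
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Under squared loss the optimal prediction is the posterior mean, so a
    # well-trained network ends up tracking the Beta-Bernoulli posterior.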

Stochastic Gradient Descent Captures How Children Learn About Physics

1 code implementation • 25 Sep 2022 • Luca M. Schulze Buschoff, Eric Schulz, Marcel Binz

We find that the model's learning trajectory captures the developmental trajectories of children, thereby providing support to the idea of development as stochastic optimization.

Stochastic Optimization

Using cognitive psychology to understand GPT-3

no code implementations • 21 Jun 2022 • Marcel Binz, Eric Schulz

We study GPT-3, a recent large language model, using tools from cognitive psychology.

Decision Making • Language Modelling • +2

Modeling Human Exploration Through Resource-Rational Reinforcement Learning

1 code implementation • 27 Jan 2022 • Marcel Binz, Eric Schulz

Equipping artificial agents with useful exploration mechanisms remains a challenge to this day.

Meta-Learning • reinforcement-learning • +2

Learning Structure from the Ground up---Hierarchical Representation Learning by Chunking

no code implementations • 29 Sep 2021 • Shuchen Wu, Noemi Elteto, Ishita Dasgupta, Eric Schulz

As learning progresses, a hierarchy of chunk representations is acquired by chunking previously learned representations into more complex ones, guided by sequential dependence.

Chunking • Representation Learning
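
A sketch of hierarchical chunk formation: repeatedly merge the most frequent adjacent pair of elements into a new, more complex chunk. Raw pair frequency is used here as a stand-in for the sequential-dependence signal described in the abstract, so this is a simplification rather than the paper's model.

    from collections import Counter

    def chunk_sequence(seq, n_merges=2):
        """Repeatedly merge the most frequent adjacent pair into a single chunk."""
        seq = list(seq)
        for _ in range(n_merges):
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs:
                break
            (a, b), _ = pairs.most_common(1)[0]
            merged, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                    merged.append(a + b)          # new, more complex chunk
                    i += 2
                else:
                    merged.append(seq[i])
                    i += 1
            seq = merged
        return seq

    print(chunk_sequence("abcabcabc"))  # ['abc', 'abc', 'abc'] after two merges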

Better safe than sorry: Risky function exploitation through safe optimization

no code implementations • 2 Feb 2016 • Eric Schulz, Quentin J. M. Huys, Dominik R. Bach, Maarten Speekenbrink, Andreas Krause

Exploration-exploitation of functions, that is, learning and optimizing a mapping between inputs and expected outputs, is ubiquitous in many real-world situations.
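
A rough sketch of the safe exploration-exploitation idea with a Gaussian process surrogate: only inputs whose pessimistic (lower-confidence-bound) prediction clears a safety threshold are eligible, and the most optimistic eligible input is queried next. This is a simplification of SafeOpt-style methods; the objective, threshold, kernel, and confidence width below are all made up for illustration.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def risky_function(x):
        """Unknown mapping from inputs to expected outputs (toy example)."""
        return np.sin(3 * x) + 0.5 * x

    X_grid = np.linspace(0, 2, 200).reshape(-1, 1)
    safety_threshold = 0.0     # outputs below this are treated as unsafe
    beta = 2.0                 # width of the confidence bounds (illustrative)

    # Start from a known safe seed point, as safe-optimization methods require.
    X_obs = np.array([[0.5]])
    y_obs = risky_function(X_obs).ravel()

    for _ in range(10):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                      alpha=1e-6, optimizer=None).fit(X_obs, y_obs)
        mu, sigma = gp.predict(X_grid, return_std=True)
        safe = mu - beta * sigma >= safety_threshold       # pessimistic safety check
        if not safe.any():
            break
        ucb = np.where(safe, mu + beta * sigma, -np.inf)   # optimistic value, safe set only
        x_next = X_grid[np.argmax(ucb)].reshape(1, -1)
        X_obs = np.vstack([X_obs, x_next])
        y_obs = np.append(y_obs, risky_function(x_next).ravel())

    print("best safe input found:", X_obs[np.argmax(y_obs)])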
