Search Results for author: Jonathan D. Cohen

Found 27 papers, 10 papers with code

Slot Abstractors: Toward Scalable Abstract Visual Reasoning

1 code implementation • 6 Mar 2024 • Shanka Subhra Mondal, Jonathan D. Cohen, Taylor W. Webb

Abstract visual reasoning is a characteristically human ability, allowing the identification of relational patterns that are abstracted away from object features, and the systematic generalization of those patterns to unseen problems.

Object, Systematic Generalization, +1

A Relational Inductive Bias for Dimensional Abstraction in Neural Networks

no code implementations • 28 Feb 2024 • Declan Campbell, Jonathan D. Cohen

The human cognitive system exhibits remarkable flexibility and generalization capabilities, partly due to its ability to form low-dimensional, compositional representations of the environment.

Inductive Bias

Human-Like Geometric Abstraction in Large Pre-trained Neural Networks

no code implementations • 6 Feb 2024 • Declan Campbell, Sreejan Kumar, Tyler Giallanza, Thomas L. Griffiths, Jonathan D. Cohen

Humans possess a remarkable capacity to recognize and manipulate abstract structure, which is especially apparent in the domain of geometry.

Relational Constraints On Neural Networks Reproduce Human Biases towards Abstract Geometric Regularity

no code implementations • 29 Sep 2023 • Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths

Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors.

A Quantitative Approach to Predicting Representational Learning and Performance in Neural Networks

no code implementations • 14 Jul 2023 • Ryan Pyle, Sebastian Musslick, Jonathan D. Cohen, Ankit B. Patel

A key property of neural networks (both biological and artificial) is how they learn to represent and manipulate input information in order to solve a task.

Determinantal Point Process Attention Over Grid Cell Code Supports Out of Distribution Generalization

1 code implementation • 28 May 2023 • Shanka Subhra Mondal, Steven Frankland, Taylor Webb, Jonathan D. Cohen

Deep neural networks have made tremendous gains in emulating human-like intelligence, and have been used increasingly as ways of understanding how the brain may solve the complex computational problems on which this relies.

Out-of-Distribution Generalization

Learning to reason over visual objects

1 code implementation • 3 Mar 2023 • Shanka Subhra Mondal, Taylor Webb, Jonathan D. Cohen

These results suggest that an inductive bias for object-centric processing may be a key component of abstract visual reasoning, obviating the need for problem-specific inductive biases.

Inductive Bias, Visual Reasoning

Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines

1 code implementation • 23 May 2022 • Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Nathaniel D. Daw, Jonathan D. Cohen, Karthik Narasimhan, Thomas L. Griffiths

Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.

Meta-Learning, Meta Reinforcement Learning, +2

Disentangling Abstraction from Statistical Pattern Matching in Human and Machine Learning

1 code implementation • 4 Apr 2022 • Sreejan Kumar, Ishita Dasgupta, Nathaniel D. Daw, Jonathan D. Cohen, Thomas L. Griffiths

However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction.

BIG-bench Machine Learning, Inductive Bias, +4

A Self-Supervised Framework for Function Learning and Extrapolation

no code implementations • 14 Jun 2021 • Simon N. Segert, Jonathan D. Cohen

Understanding how agents learn to generalize -- and, in particular, to extrapolate -- in high-dimensional, naturalistic environments remains a challenge for both machine learning and the study of biological agents.

Inductive Bias, Time Series, +1

People construct simplified mental representations to plan

no code implementations • 14 May 2021 • Mark K. Ho, David Abel, Carlos G. Correa, Michael L. Littman, Jonathan D. Cohen, Thomas L. Griffiths

We propose a computational account of this simplification process and, in a series of pre-registered behavioral experiments, show that it is subject to online cognitive control and that people optimally balance the complexity of a task representation and its utility for planning and acting.

Human Inference in Changing Environments With Temporal Structure

no code implementations • 27 Jan 2021 • Arthur Prat-Carrabin, Robert C. Wilson, Jonathan D. Cohen, Rava Azeredo da Silveira

We show that humans adapt their inference process to fine aspects of the temporal structure in the statistics of stimuli.

Bayesian Inference

Emergent Symbols through Binding in External Memory

2 code implementations • ICLR 2021 • Taylor W. Webb, Ishan Sinha, Jonathan D. Cohen

A key aspect of human intelligence is the ability to infer abstract rules directly from high-dimensional sensory data, and to do so given only a limited amount of training experience.

A Memory-Augmented Neural Network Model of Abstract Rule Learning

no code implementations • 13 Dec 2020 • Ishan Sinha, Taylor W. Webb, Jonathan D. Cohen

Further, we introduce the Emergent Symbol Binding Network (ESBN), a recurrent neural network model that learns to use an external memory as a binding mechanism.
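The binding idea can be illustrated with a toy key-value external memory (a generic sketch under assumed dimensions, not the actual ESBN architecture; the `write`/`read` helpers and all sizes here are made up for illustration): entries are written as (key, value) pairs and later retrieved by attention over key similarity.

```python
import numpy as np

# Toy key-value external memory (illustrative only, not the ESBN code):
# writing binds a key to a value; reading attends over stored keys and
# returns a similarity-weighted mixture of the stored values.
rng = np.random.default_rng(0)
d_key, d_val = 8, 5
keys, values = [], []

def write(key, value):
    keys.append(key)
    values.append(value)

def read(query):
    K, V = np.stack(keys), np.stack(values)
    scores = K @ query                   # dot-product similarity to each key
    w = np.exp(scores - scores.max())
    w /= w.sum()                         # softmax attention weights
    return w @ V                         # weighted sum of bound values

k1, v1 = rng.standard_normal(d_key), rng.standard_normal(d_val)
k2, v2 = rng.standard_normal(d_key), rng.standard_normal(d_val)
write(k1, v1)
write(k2, v2)
out = read(k1)  # querying with a stored key retrieves mostly its bound value
```

Keeping keys and values in separate streams like this is what lets the controller learn rules over abstract slots rather than over the contents themselves.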

A Mitigation Score for COVID-19

no code implementations • 2 Dec 2020 • Jonathan D. Cohen

This note describes a simple score to indicate the effectiveness of mitigation against infections of COVID-19 as observed by new case counts.

Meta-Learning of Structured Task Distributions in Humans and Machines

1 code implementation • ICLR 2021 • Sreejan Kumar, Ishita Dasgupta, Jonathan D. Cohen, Nathaniel D. Daw, Thomas L. Griffiths

We then introduce a novel approach to constructing a "null task distribution" with the same statistical complexity as this structured task distribution but without the explicit rule-based structure used to generate the structured task.

Meta-Learning, Meta Reinforcement Learning, +2

Learning Representations that Support Extrapolation

1 code implementation • ICML 2020 • Taylor W. Webb, Zachary Dulberg, Steven M. Frankland, Alexander A. Petrov, Randall C. O'Reilly, Jonathan D. Cohen

Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence.

The Efficiency of Human Cognition Reflects Planned Information Processing

no code implementations • 13 Feb 2020 • Mark K. Ho, David Abel, Jonathan D. Cohen, Michael L. Littman, Thomas L. Griffiths

Thus, people should plan their actions, but they should also be smart about how they deploy resources used for planning their actions.

Context Matters: Recovering Human Semantic Structure from Machine Learning Analysis of Large-Scale Text Corpora

no code implementations • 15 Oct 2019 • Marius Cătălin Iordan, Tyler Giallanza, Cameron T. Ellis, Nicole M. Beckage, Jonathan D. Cohen

Applying machine learning algorithms to large-scale, text-based corpora (embeddings) presents a unique opportunity to investigate at scale how human semantic knowledge is organized and how people use it to judge fundamental relationships, such as similarity between concepts.

BIG-bench Machine Learning, Empirical Judgments
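As a minimal illustration of the kind of judgment being modeled (with made-up three-dimensional vectors, not the paper's corpus-derived embeddings), similarity between concepts is typically scored as the cosine between their embedding vectors:

```python
import numpy as np

# Hypothetical 3-d "embeddings" purely for illustration; real corpus
# embeddings have hundreds of dimensions.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

dog = np.array([0.9, 0.8, 0.1])
cat = np.array([0.8, 0.9, 0.2])
chair = np.array([0.1, 0.2, 0.9])

# Related concepts lie at a smaller angle, so cosine(dog, cat)
# exceeds cosine(dog, chair) for these vectors.
```

Which concepts count as "related", of course, depends on the corpus and context the embeddings were trained on, which is the paper's point.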

A graph-theoretic approach to multitasking

no code implementations • NeurIPS 2017 • Noga Alon, Daniel Reichman, Igor Shinkar, Tal Wagner, Sebastian Musslick, Jonathan D. Cohen, Tom Griffiths, Biswadip Dey, Kayhan Ozcimder

A key feature of neural network architectures is their ability to support the simultaneous interaction among large numbers of units in the learning and processing of representations.

Matrix-normal models for fMRI analysis

1 code implementation • 8 Nov 2017 • Michael Shvartsman, Narayanan Sundaram, Mikio C. Aoi, Adam Charles, Theodore L. Willke, Jonathan D. Cohen

We show how the matrix-variate normal (MN) formalism can unify some of these methods into a single framework.
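The unifying idea can be sketched in a few lines (a generic numpy illustration of the MN formalism under assumed dimensions and covariances, not the paper's code): a draw X ~ MN(M, U, V) with row covariance U and column covariance V is equivalent to vec(X) ~ N(vec(M), V ⊗ U), so any method whose covariance factors as a Kronecker product fits the same framework.

```python
import numpy as np

# Sketch of the matrix-variate normal (MN) formalism; the dimensions and
# covariances below are arbitrary illustrative choices, not from the paper.
rng = np.random.default_rng(0)
n, p = 4, 3                      # e.g. time points x voxels
M = np.zeros((n, p))             # mean matrix

# Positive-definite row covariance U (n x n) and column covariance V (p x p)
A = rng.standard_normal((n, n)); U = A @ A.T + n * np.eye(n)
B = rng.standard_normal((p, p)); V = B @ B.T + p * np.eye(p)

# Draw X ~ MN(M, U, V) via X = M + L_U Z L_V^T with Z iid standard normal
L_U, L_V = np.linalg.cholesky(U), np.linalg.cholesky(V)
Z = rng.standard_normal((n, p))
X = M + L_U @ Z @ L_V.T

# Equivalent vectorized form (column-stacking vec):
#   vec(X) ~ N(vec(M), kron(V, U))
# via the identity vec(A Z B^T) = kron(B, A) vec(Z).
lhs = (L_U @ Z @ L_V.T).flatten(order="F")
rhs = np.kron(L_V, L_U) @ Z.flatten(order="F")
```

The Kronecker factorization is also what makes these models tractable: one works with an n x n and a p x p covariance instead of a full np x np one.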

A Theory of Decision Making Under Dynamic Context

1 code implementation • NeurIPS 2015 • Michael Shvartsman, Vaibhav Srivastava, Jonathan D. Cohen

We also show how the model generalizes recent work on the control of attention in the Flanker task (Yu et al., 2009).

Decision Making

Learning to Use Working Memory in Partially Observable Environments through Dopaminergic Reinforcement

no code implementations • NeurIPS 2008 • Michael T. Todd, Yael Niv, Jonathan D. Cohen

Working memory is a central topic of cognitive neuroscience because it is critical for solving real world problems in which information from multiple temporally distant sources must be combined to generate appropriate behavior.

Sequential effects: Superstition or rational behavior?

no code implementations • NeurIPS 2008 • Angela J. Yu, Jonathan D. Cohen

In a variety of behavioral tasks, subjects exhibit an automatic and apparently sub-optimal sequential effect: they respond more rapidly and accurately to a stimulus if it reinforces a local pattern in stimulus history, such as a string of repetitions or alternations, compared to when it violates such a pattern.

Decision Making
