Search Results for author: Sreejan Kumar

Found 7 papers, 3 papers with code

Comparing Abstraction in Humans and Large Language Models Using Multimodal Serial Reproduction

no code implementations • 6 Feb 2024 • Sreejan Kumar, Raja Marjieh, Byron Zhang, Declan Campbell, Michael Y. Hu, Umang Bhatt, Brenden Lake, Thomas L. Griffiths

To investigate the effect of language on the formation of abstractions, we implement a novel multimodal serial reproduction framework by asking people who receive a visual stimulus to reproduce it in a linguistic format, and vice versa.
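A minimal sketch of the serial-reproduction logic described in that snippet is given below. The functions `describe_image` and `draw_from_text` are hypothetical placeholders for the human participants (or models) who alternate between visual and linguistic reproductions; this is illustrative only, not the paper's released code.

```python
# Illustrative sketch of a multimodal serial reproduction chain.
# describe_image and draw_from_text are hypothetical stand-ins for
# participants; they are not part of any released implementation.

def describe_image(image):
    """Participant sees an image and reproduces it as a text description."""
    raise NotImplementedError("collected from a human participant or a vision-language model")

def draw_from_text(description):
    """Participant reads a description and reproduces it as an image."""
    raise NotImplementedError("collected from a human participant or a generative model")

def serial_reproduction(seed_image, n_generations=10):
    """Alternate vision -> language -> vision, recording each generation."""
    chain = [("image", seed_image)]
    current_image = seed_image
    for _ in range(n_generations):
        description = describe_image(current_image)   # visual stimulus -> linguistic reproduction
        chain.append(("text", description))
        current_image = draw_from_text(description)   # linguistic stimulus -> visual reproduction
        chain.append(("image", current_image))
    return chain  # the chain shows which abstractions survive repeated transmission
```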

Human-Like Geometric Abstraction in Large Pre-trained Neural Networks

no code implementations • 6 Feb 2024 • Declan Campbell, Sreejan Kumar, Tyler Giallanza, Thomas L. Griffiths, Jonathan D. Cohen

Humans possess a remarkable capacity to recognize and manipulate abstract structure, which is especially apparent in the domain of geometry.

Relational Constraints On Neural Networks Reproduce Human Biases towards Abstract Geometric Regularity

no code implementations • 29 Sep 2023 • Declan Campbell, Sreejan Kumar, Tyler Giallanza, Jonathan D. Cohen, Thomas L. Griffiths

Uniquely among primates, humans possess a remarkable capacity to recognize and manipulate abstract structure in the service of task goals across a broad range of behaviors.

Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines

1 code implementation • 23 May 2022 • Sreejan Kumar, Carlos G. Correa, Ishita Dasgupta, Raja Marjieh, Michael Y. Hu, Robert D. Hawkins, Nathaniel D. Daw, Jonathan D. Cohen, Karthik Narasimhan, Thomas L. Griffiths

Co-training on these representations results in more human-like behavior in downstream meta-reinforcement learning agents than less abstract controls (synthetic language descriptions, program induction without learned primitives), suggesting that the abstraction supported by these representations is key.

Meta-Learning • Meta Reinforcement Learning • +2
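The co-training idea in this paper's snippet can be illustrated with a hedged sketch: a shared task encoder is trained jointly on the RL objective and an auxiliary loss that regresses its features onto an abstract target representation (for example, a language or program embedding). The module names, the MSE choice, and the loss weighting below are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CoTrainedEncoder(nn.Module):
    """Task encoder shared between the RL policy and an auxiliary head that
    predicts an abstract task representation (e.g., a language embedding).
    Hypothetical architecture for illustration."""

    def __init__(self, obs_dim, hidden_dim, abstraction_dim):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.aux_head = nn.Linear(hidden_dim, abstraction_dim)

    def forward(self, obs):
        features = self.backbone(obs)
        return features, self.aux_head(features)

def co_training_loss(policy_loss, predicted_abstraction, target_abstraction, aux_weight=1.0):
    """Combine the usual RL objective with an auxiliary loss that pulls the
    encoder toward the abstract (language/program) representation of the task."""
    aux_loss = nn.functional.mse_loss(predicted_abstraction, target_abstraction)
    return policy_loss + aux_weight * aux_loss
```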

Disentangling Abstraction from Statistical Pattern Matching in Human and Machine Learning

1 code implementation • 4 Apr 2022 • Sreejan Kumar, Ishita Dasgupta, Nathaniel D. Daw, Jonathan D. Cohen, Thomas L. Griffiths

However, because neural networks are hard to interpret, it can be difficult to tell whether agents have learned the underlying abstraction, or alternatively statistical patterns that are characteristic of that abstraction.

BIG-bench Machine Learning • Inductive Bias • +4

Meta-Learning of Structured Task Distributions in Humans and Machines

1 code implementation • ICLR 2021 • Sreejan Kumar, Ishita Dasgupta, Jonathan D. Cohen, Nathaniel D. Daw, Thomas L. Griffiths

We then introduce a novel approach to constructing a "null task distribution" with the same statistical complexity as this structured task distribution but without the explicit rule-based structure used to generate the structured task.

Meta-Learning • Meta Reinforcement Learning • +2
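As a toy illustration of the null-task-distribution idea from this paper's snippet: the structured boards below are generated by an explicit rule (left-right symmetry, standing in for grammar-generated tasks), while null boards match the number of filled tiles but place them uniformly at random, removing the rule-based structure. The symmetry rule and the matched statistic are assumptions for illustration; the paper's exact construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def structured_board(size=7):
    """Rule-based board: fill a random half and mirror it, so every board is
    left-right symmetric (an assumed stand-in for rule-generated tasks)."""
    half = rng.integers(0, 2, size=(size, size // 2 + 1))
    return np.concatenate([half, np.fliplr(half[:, :size // 2])], axis=1)

def null_board(size=7, n_filled=None):
    """Null board: same number of filled tiles as a structured board (matched
    first-order statistics) but with positions sampled uniformly, so the
    rule-based symmetry is absent."""
    if n_filled is None:
        n_filled = int(structured_board(size).sum())
    board = np.zeros(size * size, dtype=int)
    board[rng.choice(size * size, size=n_filled, replace=False)] = 1
    return board.reshape(size, size)
```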
