no code implementations • 16 Nov 2023 • Thomas L. Griffiths, Jian-Qiao Zhu, Erin Grant, R. Thomas McCoy
The success of methods based on artificial neural networks in creating intelligent machines seems like it might pose a challenge to explanations of human cognition in terms of Bayesian inference.
no code implementations • NeurIPS 2023 • Aaditya K. Singh, Stephanie C. Y. Chan, Ted Moskovitz, Erin Grant, Andrew M. Saxe, Felix Hill
The transient nature of in-context learning (ICL) is observed in transformers across a range of model sizes and datasets, raising the question of how much to "overtrain" transformers when seeking compact, cheaper-to-run models.
no code implementations • 18 Oct 2023 • Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C. Love, Erin Grant, Iris Groen, Jascha Achterberg, Joshua B. Tenenbaum, Katherine M. Collins, Katherine L. Hermann, Kerem Oktar, Klaus Greff, Martin N. Hebart, Nori Jacoby, Qiuyi Zhang, Raja Marjieh, Robert Geirhos, Sherol Chen, Simon Kornblith, Sunayana Rane, Talia Konkle, Thomas P. O'Connell, Thomas Unterthiner, Andrew K. Lampinen, Klaus-Robert Müller, Mariya Toneva, Thomas L. Griffiths
Finally, we lay out open problems in representational alignment where progress can benefit all three of these fields.
no code implementations • 29 Sep 2023 • Erin Grant, Sandra Nestler, Berfin Şimşek, Sara Solla
Lecture notes from the course given by Professor Sara A. Solla at the Les Houches summer school on "Statistical physics of Machine Learning".
no code implementations • 11 Aug 2022 • Michael Y. Li, Erin Grant, Thomas L. Griffiths
Not being able to understand and predict the behavior of deep learning systems makes it hard to decide what architecture and algorithm to use for a given problem.
1 code implementation • 8 Oct 2021 • Ishita Dasgupta, Erin Grant, Thomas L. Griffiths
Machine learning systems often do not share the same inductive biases as humans and, as a result, extrapolate or generalize in ways that are inconsistent with our expectations.
1 code implementation • NeurIPS 2021 • Thomas A. Langlois, H. Charles Zhao, Erin Grant, Ishita Dasgupta, Thomas L. Griffiths, Nori Jacoby
Similarly, we find that recognition performance in the same ANN models was likewise influenced by masking input images using human visual selectivity maps.
1 code implementation • 15 May 2021 • Shikhar Tuli, Ishita Dasgupta, Erin Grant, Thomas L. Griffiths
Our focus is on comparing a suite of standard Convolutional Neural Networks (CNNs) and a recently proposed attention-based network, the Vision Transformer (ViT), which relaxes the translation-invariance constraint of CNNs and therefore represents a model with a weaker set of inductive biases.
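The translation-invariance constraint mentioned in this entry can be illustrated with a minimal NumPy sketch (not from the paper itself): a convolution is translation-equivariant, so shifting its input shifts its output by the same amount, whereas a generic dense (fully connected) layer, like the unconstrained attention maps in a ViT, carries no such guarantee.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, w):
    # 1-D "valid" cross-correlation: slide the kernel w over x, no padding
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

x = rng.normal(size=16)   # toy input signal
w = rng.normal(size=3)    # toy convolutional kernel

y = conv1d_valid(x, w)
y_shift = conv1d_valid(np.roll(x, 1), w)

# Translation equivariance: shifting the input by one position shifts
# the convolution output by one position (up to boundary effects).
assert np.allclose(y[:-1], y_shift[1:])

# A generic dense layer has no such built-in constraint: shifting the
# input changes its output in an unstructured way.
W = rng.normal(size=(len(y), len(x)))
z = W @ x
z_shift = W @ np.roll(x, 1)
assert not np.allclose(z[:-1], z_shift[1:])
```

The convolution's weight sharing is exactly the inductive bias the entry describes; the dense layer (and, analogously, unconstrained self-attention) must learn any such structure from data.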
no code implementations • 27 Nov 2020 • Rachit Dubey, Erin Grant, Michael Luo, Karthik Narasimhan, Thomas Griffiths
This work connects the context-sensitive nature of cognitive control to a method for meta-learning with context-conditioned adaptation.
1 code implementation • 29 Jun 2020 • R. Thomas McCoy, Erin Grant, Paul Smolensky, Thomas L. Griffiths, Tal Linzen
To facilitate computational modeling aimed at addressing this question, we introduce a framework for giving particular linguistic inductive biases to a neural network model; such a model can then be used to empirically explore the effects of those inductive biases.
no code implementations • ICLR 2019 • Erin Grant, Ghassen Jerfel, Katherine Heller, Thomas L. Griffiths
Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task.
no code implementations • NeurIPS 2019 • Ghassen Jerfel, Erin Grant, Thomas L. Griffiths, Katherine Heller
Learning-to-learn or meta-learning leverages data-driven inductive bias to increase the efficiency of learning on a novel task.
no code implementations • WS 2018 • Kaylee Burns, Aida Nematzadeh, Erin Grant, Alison Gopnik, Tom Griffiths
The decision-making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity.
2 code implementations • EMNLP 2018 • Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, Thomas L. Griffiths
We propose a new dataset for evaluating question answering models with respect to their capacity to reason about beliefs.
no code implementations • ICLR 2018 • Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, Thomas Griffiths
Meta-learning allows an intelligent agent to leverage prior learning episodes as a basis for quickly improving performance on a novel task.
1 code implementation • 18 Feb 2016 • Erin Grant, Aida Nematzadeh, Suzanne Stevenson
People exhibit a tendency to generalize a novel noun to the basic level in a hierarchical taxonomy -- a cognitively salient category such as "dog" -- with the degree of generalization depending on the number and type of exemplars.