no code implementations • 27 Oct 2022 • Andrew Kyle Lampinen
Prior work suggests that LMs cannot handle these structures as reliably as humans can.
no code implementations • 29 Sep 2021 • Wilka Torrico Carvalho, Andrew Kyle Lampinen, Kyriacos Nikiforou, Felix Hill, Murray Shanahan
Taking inspiration from cognitive science, we term representations of recurring segments of an agent's experience "perceptual schemas".
no code implementations • 29 Sep 2021 • Andrew Kyle Lampinen, Nicholas Andrew Roy, Ishita Dasgupta, Stephanie C. Y. Chan, Allison Tam, Chen Yan, Adam Santoro, Neil Charles Rabinowitz, Jane X. Wang, Felix Hill
Explanations play a considerable role in human learning, especially in areas that remain major challenges for AI: forming abstractions and learning about the relational and causal structure of the world.
3 code implementations • NeurIPS 2021 • Andrew Kyle Lampinen, Stephanie C. Y. Chan, Andrea Banino, Felix Hill
Agents with common memory architectures struggle to recall and integrate information across multiple timesteps of a past event, or even to recall the details of a single timestep that is followed by distractor tasks.
no code implementations • ICLR 2018 • Andrew Kyle Lampinen, James Lloyd McClelland
Standard deep learning systems require thousands or millions of examples to learn a concept, and cannot integrate new concepts easily.
no code implementations • ICLR 2018 • Andrew Kyle Lampinen, David So, Douglas Eck, Fred Bertsch
GANs provide a framework for training generative models that mimic a data distribution.