no code implementations • 17 Nov 2022 • Nicholas A. Roy, Junkyung Kim, Neil Rabinowitz
We take a pragmatic view of the issue, and define a set of desiderata that capture both the ambitions of XAI and the practical constraints of deep learning.
no code implementations • 8 Apr 2022 • Allison C. Tam, Neil C. Rabinowitz, Andrew K. Lampinen, Nicholas A. Roy, Stephanie C. Y. Chan, DJ Strouse, Jane X. Wang, Andrea Banino, Felix Hill
We show that these pretrained representations drive meaningful, task-relevant exploration and improve performance on 3D simulated environments.
1 code implementation • 7 Dec 2021 • Andrew K. Lampinen, Nicholas A. Roy, Ishita Dasgupta, Stephanie C. Y. Chan, Allison C. Tam, James L. McClelland, Chen Yan, Adam Santoro, Neil C. Rabinowitz, Jane X. Wang, Felix Hill
Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents.
1 code implementation • NeurIPS 2020 • Zoe Ashwood, Nicholas A. Roy, Ji Hyun Bak, Jonathan W. Pillow
Specifically, this allows us to: (i) compare different learning rules and objective functions that an animal may be using to update its policy; (ii) estimate distinct learning rates for different parameters of an animal’s policy; (iii) identify variations in learning across cohorts of animals; and (iv) uncover trial-to-trial changes that are not captured by normative learning rules.
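As a rough illustration of point (ii), a trial-by-trial update of a logistic (Bernoulli GLM) policy with a distinct learning rate per parameter might be sketched as follows. This is a hypothetical REINFORCE-style rule written for illustration, not the paper's fitted model; the stimulus features, weights, and learning rates below are made up.

```python
import numpy as np

def logistic_policy(w, x):
    """Probability of choosing action 1 given stimulus x under weights w."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def reinforce_update(w, x, choice, reward, alphas):
    """One trial-by-trial policy update with a distinct learning rate
    alphas[i] for each policy parameter w[i] (hypothetical rule)."""
    p = logistic_policy(w, x)
    # Gradient of the log-probability of the taken choice (Bernoulli GLM).
    grad_logp = (choice - p) * x
    # Element-wise learning rates let each parameter adapt at its own pace.
    return w + alphas * reward * grad_logp

w = np.zeros(3)                       # policy weights
x = np.array([1.0, 0.5, -0.2])       # stimulus features (illustrative)
alphas = np.array([0.1, 0.05, 0.2])  # distinct per-parameter learning rates
w = reinforce_update(w, x, choice=1, reward=1.0, alphas=alphas)
```

Fitting such a rule to observed choices would then amount to inferring `alphas` (and the form of the update) from behavioral data.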
no code implementations • NeurIPS 2020 • Genevieve Flaspohler, Nicholas A. Roy, John W. Fisher III
This work introduces macro-action discovery using value-of-information (VoI) for robust and efficient planning in partially observable Markov decision processes (POMDPs).
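To give intuition for the VoI quantity driving the approach, a minimal myopic (one-step) sketch in a discrete POMDP is shown below: the value of acting after receiving one observation, minus the value of acting on the prior belief. The function and matrices here are illustrative assumptions, not the paper's algorithm, which uses VoI to discover macro-actions.

```python
import numpy as np

def voi(belief, R, O):
    """Myopic value of information of one observation before acting.

    belief : prior over states, shape (S,)
    R      : immediate reward matrix, R[s, a]
    O      : observation likelihoods, O[s, o]
    """
    # Value of acting immediately on the prior belief.
    v_prior = np.max(belief @ R)
    # Marginal probability of each observation under the prior.
    p_obs = belief @ O
    v_post = 0.0
    for o, p in enumerate(p_obs):
        if p == 0.0:
            continue
        posterior = belief * O[:, o] / p     # Bayes update on observing o
        v_post += p * np.max(posterior @ R)  # best action given posterior
    return v_post - v_prior

# Two states, two actions: each action is worth 1 only in its matching state.
R = np.array([[1.0, 0.0],
              [0.0, 1.0]])
O = np.eye(2)                 # a perfectly informative observation
b = np.array([0.5, 0.5])      # maximally uncertain prior
print(voi(b, R, O))           # 0.5: knowing the state doubles expected reward
```

Belief-state segments where this quantity stays low are exactly where committing to an open-loop macro-action costs little, which is the intuition behind using VoI for macro-action discovery.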