no code implementations • ICML 2020 • Michael Chang, Sid Kaushik, S. Matthew Weinberg, Sergey Levine, Thomas Griffiths
This paper seeks to establish a mechanism for directing a collection of simple, specialized, self-interested agents to solve what traditionally are posed as monolithic single-agent sequential decision problems with a central global objective.
no code implementations • 20 Mar 2023 • Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang
Object rearrangement is a challenge for embodied agents because solving these tasks requires generalizing across a combinatorially large set of configurations of entities and their locations.
no code implementations • 18 Jan 2023 • Adaptive Agent Team, Jakob Bauer, Kate Baumli, Satinder Baveja, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, Vibhavari Dasagi, Lucy Gonzalez, Karol Gregor, Edward Hughes, Sheleem Kashem, Maria Loks-Thompson, Hannah Openshaw, Jack Parker-Holder, Shreya Pathak, Nicolas Perez-Nieves, Nemanja Rakicevic, Tim Rocktäschel, Yannick Schroecker, Jakub Sygnowski, Karl Tuyls, Sarah York, Alexander Zacherl, Lei Zhang
Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL).
no code implementations • 23 Oct 2022 • Pedram Agand, Michael Chang, Mo Chen
However, these cues can be misleading for objects that vary widely in appearance or appear in adversarial situations, which is a challenging aspect of object-agnostic distance estimation.
1 code implementation • 2 Jul 2022 • Michael Chang, Thomas L. Griffiths, Sergey Levine
Iterative refinement -- start with a random guess, then iteratively improve it -- is a useful paradigm for representation learning because it offers a way to break symmetries among equally plausible explanations for the data.
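The start-random-then-improve loop described above can be illustrated with a minimal sketch. This toy example applies the paradigm to a simple numeric objective (Newton's update for a square root) rather than the paper's learned representations; the function names are illustrative only.

```python
import random

def refine(guess, target):
    # One refinement step: improve the current guess.
    # Here this is Newton's update for x^2 = target; the paper applies
    # the same loop structure to learned latent representations.
    return 0.5 * (guess + target / guess)

def iterative_refinement(target, steps=20, seed=0):
    random.seed(seed)
    guess = random.uniform(1.0, 10.0)  # start with a random guess
    for _ in range(steps):
        guess = refine(guess, target)  # iteratively improve the guess
    return guess

print(round(iterative_refinement(2.0), 6))  # converges to sqrt(2)
```

The key property the paradigm exploits is that the refinement operator is applied repeatedly from an arbitrary starting point, so different random initializations can settle into different (but equally valid) solutions, breaking symmetry among them.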
1 code implementation • ICML Workshop URL 2021 • Arnaud Fickinger, Natasha Jaques, Samyak Parajuli, Michael Chang, Nicholas Rhinehart, Glen Berseth, Stuart Russell, Sergey Levine
Unsupervised reinforcement learning (RL) studies how to leverage environment statistics to learn useful behaviors without the cost of reward engineering.
no code implementations • ICLR Workshop Learning to Learn 2021 • Michael Chang, Sidhant Kaushik, Sergey Levine, Thomas L. Griffiths
Empirical evidence suggests that such action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.
no code implementations • 5 Jul 2020 • Michael Chang, Sidhant Kaushik, S. Matthew Weinberg, Thomas L. Griffiths, Sergey Levine
This paper seeks to establish a framework for directing a society of simple, specialized, self-interested agents to solve what traditionally are posed as monolithic single-agent sequential decision problems.
1 code implementation • 28 Oct 2019 • Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua B. Tenenbaum, Sergey Levine
This paper tests the hypothesis that modeling a scene in terms of entities and their local interactions, as opposed to modeling the scene globally, provides a significant benefit in generalizing to physical tasks in a combinatorial space the learner has not encountered before.
no code implementations • 25 Sep 2019 • Sophia Sanborn, Michael Chang, Sergey Levine, Thomas Griffiths
Many approaches to hierarchical reinforcement learning aim to identify sub-goal structure in tasks.
1 code implementation • NeurIPS 2019 • Xue Bin Peng, Michael Chang, Grace Zhang, Pieter Abbeel, Sergey Levine
In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors.
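Multiplicative composition can be sketched concretely for 1-D Gaussian primitives: a product of weighted Gaussian densities is again Gaussian in closed form. This is a minimal illustration assuming fixed scalar weights (MCP learns state-dependent weights and operates on full policies); all names here are illustrative.

```python
import math

def compose_gaussians(primitives, weights):
    """Multiplicatively compose 1-D Gaussian primitives.

    Each primitive is (mean, std). The composite density is
    proportional to prod_i N(a | mu_i, sigma_i) ** w_i, which for
    Gaussians has a closed-form Gaussian result: precisions add
    (scaled by the weights) and the mean is a precision-weighted
    average of the primitive means.
    """
    precision = sum(w / s ** 2 for (m, s), w in zip(primitives, weights))
    mean = sum(w * m / s ** 2 for (m, s), w in zip(primitives, weights)) / precision
    return mean, math.sqrt(1.0 / precision)

# Two primitives pulling toward different actions; the weights gate
# how much each primitive contributes to the composed behavior.
prims = [(-1.0, 1.0), (3.0, 1.0)]
print(compose_gaussians(prims, [0.5, 0.5]))  # equal weights: mean halfway, at 1.0
print(compose_gaussians(prims, [0.9, 0.1]))  # first primitive dominates the mean
```

Because composition is multiplicative rather than additive, every active primitive constrains the composite policy simultaneously, which is what lets a small set of motor skills combine into a range of complex behaviors.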
no code implementations • 18 Jul 2018 • Sophia Sanborn, David D. Bourgin, Michael Chang, Thomas L. Griffiths
The importance of hierarchically structured representations for tractable planning has long been acknowledged.
3 code implementations • ICLR 2018 • Sjoerd van Steenkiste, Michael Chang, Klaus Greff, Jürgen Schmidhuber
Common-sense physical reasoning is an essential ingredient for any intelligent agent operating in the real world.
no code implementations • 22 Feb 2016 • William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum
We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.