Search Results for author: Michael Chang

Found 14 papers, 5 papers with code

Global Decision-Making via Local Economic Transactions

no code implementations ICML 2020 Michael Chang, Sid Kaushik, S. Matthew Weinberg, Sergey Levine, Thomas Griffiths

This paper seeks to establish a mechanism for directing a collection of simple, specialized, self-interested agents to solve what traditionally are posed as monolithic single-agent sequential decision problems with a central global objective.

Decision Making
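
The snippet names the goal but not the mechanism; the title's framing is economic, with agents transacting for the right to act. One natural reading is a Vickrey-style auction at each decision step. A minimal sketch under that assumption, where the `bid` and `act` methods are hypothetical names for illustration, not the paper's API:

```python
def vickrey_auction_step(agents, state):
    """One auction step for decentralized action selection: each
    self-interested agent bids for the right to act in `state`; the
    highest bidder wins and pays the second-highest bid (Vickrey
    pricing, under which truthful bidding is a dominant strategy).
    `bid` and `act` are hypothetical method names for illustration."""
    bids = sorted(((agent.bid(state), agent) for agent in agents),
                  key=lambda pair: pair[0], reverse=True)
    (top_bid, winner), (price, _) = bids[0], bids[1]
    action = winner.act(state)    # winner's action drives the environment
    return action, winner, price  # winner pays `price` as its cost
```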

Neural Constraint Satisfaction: Hierarchical Abstraction for Combinatorial Generalization in Object Rearrangement

no code implementations 20 Mar 2023 Michael Chang, Alyssa L. Dayan, Franziska Meier, Thomas L. Griffiths, Sergey Levine, Amy Zhang

Object rearrangement is a challenge for embodied agents because solving these tasks requires generalizing across a combinatorially large set of configurations of entities and their locations.

DMODE: Differential Monocular Object Distance Estimation Module without Class Specific Information

no code implementations 23 Oct 2022 Pedram Agand, Michael Chang, Mo Chen

However, these cues can be misleading for objects with wide-ranging variation or in adversarial situations, which is a challenging aspect of object-agnostic distance estimation.

Object Position

Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation

1 code implementation 2 Jul 2022 Michael Chang, Thomas L. Griffiths, Sergey Levine

Iterative refinement -- start with a random guess, then iteratively improve the guess -- is a useful paradigm for representation learning because it offers a way to break symmetries among equally plausible explanations for the data.

Representation Learning
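
The snippet states the training idea at a high level; as the title suggests, one way to realize it is to run the refinement loop to an approximate fixed point without tracking gradients, then backpropagate through a single final update as a first-order approximation of implicit differentiation. A minimal PyTorch-style sketch, where `f`, `z0`, `x`, and `n_iters` are placeholder names:

```python
import torch

def refine_to_fixed_point(f, z0, x, n_iters=20):
    """Iterative refinement z_{t+1} = f(z_t, x): run the loop to an
    approximate fixed point with gradients disabled, then take one
    final differentiable step from the detached fixed point -- a
    first-order approximation of implicit differentiation."""
    z = z0
    with torch.no_grad():
        for _ in range(n_iters):
            z = f(z, x)          # improve the guess; no graph is built
    return f(z.detach(), x)      # gradients flow through this step only
```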

Modularity in Reinforcement Learning via Algorithmic Independence in Credit Assignment

no code implementations ICLR Workshop Learning_to_Learn 2021 Michael Chang, Sidhant Kaushik, Sergey Levine, Thomas L. Griffiths

Empirical evidence suggests that such action-value methods are more sample efficient than policy-gradient methods on transfer problems that require only sparse changes to a sequence of previously optimal decisions.

Decision Making · Policy Gradient Methods · +2

Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions

no code implementations5 Jul 2020 Michael Chang, Sidhant Kaushik, S. Matthew Weinberg, Thomas L. Griffiths, Sergey Levine

This paper seeks to establish a framework for directing a society of simple, specialized, self-interested agents to solve what traditionally are posed as monolithic single-agent sequential decision problems.

Decision Making · reinforcement-learning · +2

Entity Abstraction in Visual Model-Based Reinforcement Learning

1 code implementation 28 Oct 2019 Rishi Veerapaneni, John D. Co-Reyes, Michael Chang, Michael Janner, Chelsea Finn, Jiajun Wu, Joshua B. Tenenbaum, Sergey Levine

This paper tests the hypothesis that modeling a scene in terms of entities and their local interactions, as opposed to modeling the scene globally, provides a significant benefit in generalizing to physical tasks in a combinatorial space the learner has not encountered before.

Model-based Reinforcement Learning · Object · +5
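
The entity-centric hypothesis in the snippet is commonly realized with a factorized dynamics model that reuses shared networks per entity and per entity pair, rather than one global network over the whole scene. A hedged sketch of that factorization; the module names, shapes, and interface are illustrative assumptions, not the paper's API:

```python
import torch

def entity_dynamics(z, node_fn, edge_fn):
    """Factorized dynamics over entity latents z: [num_entities, dim].
    Each entity is updated from its own state plus the summed effect
    of pairwise interactions -- the same shared edge_fn/node_fn
    weights are reused for every entity, which is what supports
    generalizing to unseen combinations of entities."""
    n = z.shape[0]
    # All ordered pairs (i, j): effect of entity j on entity i.
    zi = z.unsqueeze(1).expand(n, n, -1)
    zj = z.unsqueeze(0).expand(n, n, -1)
    effects = edge_fn(torch.cat([zi, zj], dim=-1))  # [n, n, dim]
    mask = 1.0 - torch.eye(n).unsqueeze(-1)         # zero out self-pairs
    agg = (effects * mask).sum(dim=1)               # [n, dim]
    return node_fn(torch.cat([z, agg], dim=-1))     # next-step latents
```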

MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies

1 code implementation NeurIPS 2019 Xue Bin Peng, Michael Chang, Grace Zhang, Pieter Abbeel, Sergey Levine

In this work, we propose multiplicative compositional policies (MCP), a method for learning reusable motor skills that can be composed to produce a range of complex behaviors.

Continuous Control
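
For Gaussian primitives, the multiplicative composition in the abstract has a closed form: raising each primitive to a nonnegative gating weight and renormalizing yields another Gaussian with a precision-weighted mean. A minimal sketch of that composition for diagonal Gaussians; the tensor names and shapes are illustrative:

```python
import torch

def mcp_compose(mus, sigmas, w):
    """Multiplicative composition pi(a) ~ prod_i pi_i(a)^{w_i} for
    diagonal Gaussian primitives. mus, sigmas: [k, action_dim] means
    and stddevs of k primitives; w: [k] nonnegative gating weights.
    Returns the mean and stddev of the composite Gaussian."""
    w = w.unsqueeze(-1)                     # [k, 1], broadcast over dims
    precision = (w / sigmas.pow(2)).sum(0)  # composite precision per dim
    var = 1.0 / precision
    mu = var * (w * mus / sigmas.pow(2)).sum(0)  # precision-weighted mean
    return mu, var.sqrt()
```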

Understanding Visual Concepts with Continuation Learning

no code implementations 22 Feb 2016 William F. Whitney, Michael Chang, Tejas Kulkarni, Joshua B. Tenenbaum

We introduce a neural network architecture and a learning algorithm to produce factorized symbolic representations.

Atari Games
