Self-Supervised Exploration via Disagreement

10 Jun 2019 · Deepak Pathak, Dhiraj Gandhi, Abhinav Gupta

Efficient exploration is a long-standing problem in sensorimotor learning. Major advances have been demonstrated in noise-free, non-stochastic domains such as video games and simulation. However, most of these formulations either get stuck in environments with stochastic dynamics or are too inefficient to scale to real robotic setups. In this paper, we propose a formulation for exploration inspired by work in the active learning literature. Specifically, we train an ensemble of dynamics models and incentivize the agent to explore such that the disagreement across the ensemble's predictions is maximized. This allows the agent to learn skills by exploring in a self-supervised manner without any external reward. Notably, we further leverage the disagreement objective to optimize the agent's policy in a differentiable manner, without using reinforcement learning, which results in sample-efficient exploration. We demonstrate the efficacy of this formulation across a variety of benchmark environments including stochastic Atari, MuJoCo, and Unity. Finally, we implement our differentiable exploration on a real robot that learns to interact with objects completely from scratch. Project videos and code are at https://pathak22.github.io/exploration-by-disagreement/
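
The core idea in the abstract, using ensemble disagreement as an intrinsic reward, can be sketched in a few lines. The snippet below is a minimal illustration only, not the authors' released code: PyTorch is assumed, and the names (ForwardModel, disagreement_reward), network sizes, and the random-data usage example are hypothetical. Each ensemble member is a forward dynamics model predicting next-state features from the current features and action; the intrinsic reward is the variance of their predictions, which is large for transitions the models have not yet learned.

```python
# Minimal sketch of disagreement-based intrinsic reward (assumed PyTorch setup).
# Illustrative only: architecture, sizes, and helper names are not from the paper's code.
import torch
import torch.nn as nn


class ForwardModel(nn.Module):
    """One ensemble member: predicts next-state features from (features, action)."""

    def __init__(self, feat_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim),
        )

    def forward(self, feat: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([feat, act], dim=-1))


def disagreement_reward(models, feat, act):
    """Intrinsic reward = variance of the ensemble's next-state predictions,
    averaged over feature dimensions. High variance marks unexplored dynamics."""
    preds = torch.stack([m(feat, act) for m in models])  # (ensemble, batch, feat_dim)
    return preds.var(dim=0).mean(dim=-1)                 # (batch,)


# Usage example with random data (batch of 4 transitions); values are illustrative.
feat_dim, act_dim, ensemble_size = 32, 4, 5
ensemble = [ForwardModel(feat_dim, act_dim) for _ in range(ensemble_size)]
feat = torch.randn(4, feat_dim)
act = torch.randn(4, act_dim)
r_intrinsic = disagreement_reward(ensemble, feat, act)
print(r_intrinsic.shape)  # torch.Size([4])
```

In training, each ensemble member would be fit by regression on observed transitions (typically on different bootstrapped subsets), and because the reward above is a differentiable function of the action, the policy can in principle be updated by backpropagating through it directly rather than via a reinforcement learning objective, which is the sample-efficiency point made in the abstract.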


Datasets

URLB (Unsupervised Reinforcement Learning Benchmark)

Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Unsupervised Reinforcement Learning | URLB (pixels, 10^5 frames) | Disagreement | Walker (mean normalized return) | 17.36±10.71 | #4
Unsupervised Reinforcement Learning | URLB (pixels, 10^5 frames) | Disagreement | Quadruped (mean normalized return) | 18.96±5.93 | #10
Unsupervised Reinforcement Learning | URLB (pixels, 10^5 frames) | Disagreement | Jaco (mean normalized return) | 10.74±3.28 | #5
Unsupervised Reinforcement Learning | URLB (pixels, 10^6 frames) | Disagreement | Walker (mean normalized return) | 43.03±21.11 | #1
Unsupervised Reinforcement Learning | URLB (pixels, 10^6 frames) | Disagreement | Quadruped (mean normalized return) | 19.60±6.28 | #8
Unsupervised Reinforcement Learning | URLB (pixels, 10^6 frames) | Disagreement | Jaco (mean normalized return) | 29.56±9.66 | #2
Unsupervised Reinforcement Learning | URLB (pixels, 2*10^6 frames) | Disagreement | Walker (mean normalized return) | 43.18±20.03 | #2
Unsupervised Reinforcement Learning | URLB (pixels, 2*10^6 frames) | Disagreement | Quadruped (mean normalized return) | 22.00±6.92 | #6
Unsupervised Reinforcement Learning | URLB (pixels, 2*10^6 frames) | Disagreement | Jaco (mean normalized return) | 54.95±9.23 | #2
Unsupervised Reinforcement Learning | URLB (pixels, 5*10^5 frames) | Disagreement | Walker (mean normalized return) | 35.91±13.78 | #1
Unsupervised Reinforcement Learning | URLB (pixels, 5*10^5 frames) | Disagreement | Quadruped (mean normalized return) | 22.21±6.96 | #7
Unsupervised Reinforcement Learning | URLB (pixels, 5*10^5 frames) | Disagreement | Jaco (mean normalized return) | 17.89±3.75 | #3
Unsupervised Reinforcement Learning | URLB (states, 10^5 frames) | Disagreement | Walker (mean normalized return) | 83.39±32.77 | #1
Unsupervised Reinforcement Learning | URLB (states, 10^5 frames) | Disagreement | Quadruped (mean normalized return) | 31.98±8.496 | #5
Unsupervised Reinforcement Learning | URLB (states, 10^5 frames) | Disagreement | Jaco (mean normalized return) | 71.26±11.14 | #4
Unsupervised Reinforcement Learning | URLB (states, 10^6 frames) | Disagreement | Walker (mean normalized return) | 83.85±29.19 | #2
Unsupervised Reinforcement Learning | URLB (states, 10^6 frames) | Disagreement | Quadruped (mean normalized return) | 70.20±12.73 | #1
Unsupervised Reinforcement Learning | URLB (states, 10^6 frames) | Disagreement | Jaco (mean normalized return) | 73.10±9.01 | #1
Unsupervised Reinforcement Learning | URLB (states, 2*10^6 frames) | Disagreement | Walker (mean normalized return) | 76.86±29.64 | #3
Unsupervised Reinforcement Learning | URLB (states, 2*10^6 frames) | Disagreement | Quadruped (mean normalized return) | 75.39±14.50 | #1
Unsupervised Reinforcement Learning | URLB (states, 2*10^6 frames) | Disagreement | Jaco (mean normalized return) | 63.61±6.32 | #1
Unsupervised Reinforcement Learning | URLB (states, 5*10^5 frames) | Disagreement | Walker (mean normalized return) | 85.32±31.62 | #2
Unsupervised Reinforcement Learning | URLB (states, 5*10^5 frames) | Disagreement | Quadruped (mean normalized return) | 52.64±11.01 | #3
Unsupervised Reinforcement Learning | URLB (states, 5*10^5 frames) | Disagreement | Jaco (mean normalized return) | 77.89±10.89 | #1

Methods


No methods listed for this paper.