Count-Based Exploration with Neural Density Models

Bellemare et al. (2016) introduced the notion of a pseudo-count, derived from a density model, to generalize count-based exploration to non-tabular reinforcement learning. This pseudo-count was used to generate an exploration bonus for a DQN agent and, combined with a mixed Monte Carlo update, was sufficient to achieve state-of-the-art performance on the Atari 2600 game Montezuma's Revenge. We consider two questions left open by their work: First, how important is the quality of the density model for exploration? Second, what role does the Monte Carlo update play in exploration? We answer the first question by demonstrating the use of PixelCNN, an advanced neural density model for images, to supply a pseudo-count. In particular, we examine the intrinsic difficulties in adapting Bellemare et al.'s approach when assumptions about the model are violated. The result is a more practical and general algorithm requiring no special apparatus. We combine PixelCNN pseudo-counts with different agent architectures to dramatically improve the state of the art on several hard Atari games. One surprising finding is that the mixed Monte Carlo update is a powerful facilitator of exploration in the sparsest of settings, including Montezuma's Revenge.
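
For concreteness, the sketch below shows how a pseudo-count can be derived from any sequential density model, following the construction of Bellemare et al. (2016): if rho(x) is the model's probability of x before training on it and rho'(x) is its "recoding" probability after one more update on x, the pseudo-count is N̂(x) = rho(x)(1 − rho'(x)) / (rho'(x) − rho(x)), and the exploration bonus is beta / sqrt(N̂(x) + 0.01). This is a minimal illustration, not the paper's implementation; the function names, the clamping behavior, and the value of beta are assumptions for the example.

```python
import math

def pseudo_count(log_prob: float, log_recoding_prob: float) -> float:
    """Pseudo-count N̂(x) from a density model's probability of x before
    (rho) and after (rho') one more training step on x:
        N̂(x) = rho(x) * (1 - rho'(x)) / (rho'(x) - rho(x)).
    Arguments are log-probabilities, since image density models such as
    PixelCNN typically report likelihoods in log space."""
    rho = math.exp(log_prob)
    rho_prime = math.exp(log_recoding_prob)
    # The construction assumes the model is "learning-positive"
    # (rho' > rho). Neural models can violate this after an update;
    # here we clamp such cases to an infinite count (illustrative
    # choice), which drives the bonus below to zero.
    prediction_gain = rho_prime - rho
    if prediction_gain <= 0.0:
        return float("inf")
    return rho * (1.0 - rho_prime) / prediction_gain

def exploration_bonus(n_hat: float, beta: float = 0.05) -> float:
    """Count-based bonus beta / sqrt(N̂(x) + 0.01), added to the
    environment reward. The value of beta is a placeholder."""
    return beta / math.sqrt(n_hat + 0.01)
```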
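The mixed Monte Carlo (MMC) update referenced in the abstract interpolates between the usual one-step Q-learning target and the episode's full Monte Carlo return. The following is a hedged sketch of that target under assumed names; the mixing coefficient `mmc_beta` and its value are illustrative, not taken from the paper.

```python
import numpy as np

def mixed_monte_carlo_target(
    rewards: np.ndarray,   # rewards r_t, ..., r_{T-1} for the rest of the episode
    q_next_max: float,     # max_a Q(s_{t+1}, a) from the target network
    gamma: float = 0.99,
    mmc_beta: float = 0.1, # mixing coefficient (illustrative value)
) -> float:
    """Target = (1 - beta) * one-step target + beta * Monte Carlo return."""
    # Standard one-step bootstrapped target.
    one_step = rewards[0] + gamma * q_next_max
    # Discounted Monte Carlo return over the remainder of the episode.
    discounts = gamma ** np.arange(len(rewards))
    mc_return = float(np.dot(discounts, rewards))
    return (1.0 - mmc_beta) * one_step + mmc_beta * mc_return
```

Blending in the Monte Carlo return propagates sparse rewards backward in a single update rather than one step at a time, which is consistent with the abstract's observation that MMC is a powerful facilitator of exploration in the sparsest settings.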

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Atari Games | Atari 2600 Freeway | DQN-PixelCNN | Score | 31.7 | #29 |
| Atari Games | Atari 2600 Freeway | DQN-CTS | Score | 33.0 | #19 |
| Atari Games | Atari 2600 Gravitar | DQN-CTS | Score | 238.0 | #50 |
| Atari Games | Atari 2600 Gravitar | DQN-PixelCNN | Score | 498.3 | #31 |
| Atari Games | Atari 2600 Montezuma's Revenge | DQN-PixelCNN | Score | 3705.5 | #9 |
| Atari Games | Atari 2600 Private Eye | DQN-PixelCNN | Score | 8358.7 | #12 |
| Atari Games | Atari 2600 Private Eye | DQN-CTS | Score | 206.0 | #34 |
| Atari Games | Atari 2600 Venture | DQN-PixelCNN | Score | 82.2 | #34 |
| Atari Games | Atari 2600 Venture | DQN-CTS | Score | 48.0 | #37 |