The Atari 2600 Games task (and dataset) involves training an agent to achieve high game scores.
In this work, we build on recent advances in distributional reinforcement learning to give a generally applicable, flexible, and state-of-the-art distributional variant of DQN.
SOTA for Atari Games on Atari 2600 Beam Rider
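A minimal sketch of the kind of categorical value-distribution update described above: probability mass over a fixed grid of return atoms is Bellman-shifted and projected back onto the support. The atom count, value bounds, and function name here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def project_distribution(rewards, dones, next_probs,
                         v_min=-10.0, v_max=10.0, gamma=0.99):
    """Project the Bellman-shifted categorical distribution r + gamma*z
    back onto the fixed support atoms (C51-style projection sketch)."""
    n_atoms = next_probs.shape[-1]
    z = np.linspace(v_min, v_max, n_atoms)                 # fixed support atoms
    dz = (v_max - v_min) / (n_atoms - 1)
    # Bellman-shift each atom and clip to the support range
    tz = np.clip(rewards[:, None] + gamma * (1.0 - dones[:, None]) * z,
                 v_min, v_max)
    b = (tz - v_min) / dz                                  # fractional atom index
    lo, hi = np.floor(b).astype(int), np.ceil(b).astype(int)
    proj = np.zeros_like(next_probs)
    batch = np.arange(next_probs.shape[0])[:, None]
    # distribute each atom's probability mass to its two nearest support atoms
    np.add.at(proj, (batch, lo), next_probs * (hi - b))
    np.add.at(proj, (batch, hi), next_probs * (b - lo))
    # when b lands exactly on an atom, both weights above are zero: restore the mass
    np.add.at(proj, (batch, lo), next_probs * (hi == lo))
    return proj

# illustrative usage: a batch of two transitions with uniform next-state distributions
probs = np.full((2, 51), 1.0 / 51)
projected = project_distribution(np.array([1.0, -1.0]),
                                 np.array([0.0, 1.0]), probs)
```

The projected distribution would then serve as the regression target for a cross-entropy loss against the network's predicted distribution.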
We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers.
SOTA for Atari Games on Atari 2600 Asteroids
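The asynchronous-gradient-descent idea above can be sketched with plain threads sharing one parameter vector: each worker computes its own (possibly stale) gradient and applies it without locks. The quadratic toy objective, worker count, and step counts are illustrative assumptions; this is not the paper's actor-critic setup.

```python
import threading
import numpy as np

# Shared parameters, updated asynchronously and lock-free by several workers.
theta = np.array([5.0, -3.0])          # shared parameter vector
target = np.array([1.0, 2.0])          # toy objective: minimize ||theta - target||^2

def worker(steps=2000, lr=0.01):
    for _ in range(steps):
        grad = 2.0 * (theta - target)  # gradient from a possibly stale read of theta
        theta[:] -= lr * grad          # unsynchronized in-place update

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the small step size makes each update a contraction toward the optimum, the occasional stale gradients do not prevent convergence, which is the intuition behind lock-free asynchronous training.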
We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning.
SOTA for Atari Games on Atari 2600 Pong
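Underlying the deep model described above is the Q-learning bootstrap target, which the network is trained to regress toward. A tabular sketch (the 3-state chain environment and constants are illustrative assumptions, standing in for the learned network and emulator):

```python
import numpy as np

# Tabular sketch of the Q-learning update that a deep Q-network approximates:
# move Q(s, a) toward the bootstrap target r + gamma * max_a' Q(s', a').
n_states, n_actions, gamma, alpha = 3, 2, 0.9, 0.5
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next, done):
    target = r + (0.0 if done else gamma * Q[s_next].max())
    Q[s, a] += alpha * (target - Q[s, a])

# repeat two illustrative transitions: reward 1 ends the episode from state 2,
# and state 1 transitions into state 2 with no reward
for _ in range(20):
    q_update(2, 0, 1.0, 0, True)
    q_update(1, 0, 0.0, 2, False)
```

The values propagate backward through the bootstrap: the terminal reward drives Q[2, 0] toward 1.0, and Q[1, 0] toward the discounted value gamma * 1.0.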
Extending the idea of a locally consistent operator, we then derive sufficient conditions for an operator to preserve optimality, leading to a family of operators which includes our consistent Bellman operator.
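For reference, the operator family above modifies the standard Bellman optimality operator, which can be written as (notation illustrative):

```latex
(\mathcal{T}Q)(x,a) = r(x,a) + \gamma \,
  \mathbb{E}_{x' \sim P(\cdot \mid x,a)}\big[\max_{a'} Q(x',a')\big]
```

One commonly cited form of the consistent Bellman operator subtracts the local action gap whenever the state does not change, leaving the optimal value function fixed while widening the gap between optimal and suboptimal actions:

```latex
(\mathcal{T}_C Q)(x,a) = r(x,a) + \gamma \,
  \mathbb{E}_{x'}\Big[\max_{a'} Q(x',a')
    - \mathbb{1}_{[x = x']}\big(\max_{a'} Q(x,a') - Q(x,a)\big)\Big]
```

The exact conditions under which such operators preserve optimality are the subject of the derivation summarized above.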
In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, and changes in game parameters, and it can host existing C/C++-based game environments such as the Arcade Learning Environment.
We obtain both state-of-the-art results and anecdotal evidence demonstrating the importance of the value distribution in approximate reinforcement learning.
SOTA for Atari Games on Atari 2600 Asterix