Atari Games
277 papers with code • 64 benchmarks • 6 datasets
The Atari 2600 Games task (and dataset) involves training an agent to achieve high game scores.
(Image credit: Playing Atari with Deep Reinforcement Learning)
Libraries
Use these libraries to find Atari Games models and implementations.
Datasets
Latest papers with no code
Unlocking the Power of Representations in Long-term Novelty-based Exploration
We introduce Robust Exploration via Clustering-based Online Density Estimation (RECODE), a non-parametric method for novelty-based exploration that estimates visitation counts for clusters of states based on their similarity in a chosen embedding space.
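The idea of estimating visitation counts over clusters of state embeddings can be illustrated with a toy sketch. This is an assumption-laden simplification of clustering-based online density estimation, not the RECODE algorithm itself: states within a (hypothetical) distance `threshold` of a cluster centroid share a visit count, and the exploration bonus shrinks as that count grows.

```python
import numpy as np

def novelty_bonus(embedding, clusters, counts, threshold=1.0):
    """Toy count-based novelty bonus over state clusters.

    `clusters` is a list of centroid vectors and `counts` their visit
    counts. An illustrative sketch of clustering-based density
    estimation for exploration, not the RECODE method.
    """
    if clusters:
        dists = [np.linalg.norm(embedding - c) for c in clusters]
        i = int(np.argmin(dists))
        if dists[i] < threshold:
            counts[i] += 1                       # assign to nearest cluster
            return 1.0 / np.sqrt(counts[i])      # rarer clusters -> larger bonus
    clusters.append(np.asarray(embedding, dtype=float))  # start a new cluster
    counts.append(1)
    return 1.0
```

A first visit to a region yields the maximal bonus of 1.0; a second visit to a nearby embedding returns 1/sqrt(2), so the agent is steered toward states whose clusters it has seen least.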
Approximate Shielding of Atari Agents for Safe Exploration
Balancing exploration and conservatism in the constrained setting is an important problem if we are to use reinforcement learning for meaningful tasks in the real world.
Loss of Plasticity in Continual Deep Reinforcement Learning
The ability to learn continually is essential in a complex and changing world.
Double A3C: Deep Reinforcement Learning on OpenAI Gym Games
Reinforcement Learning (RL) is an area of machine learning concerned with how agents take actions in an unknown environment to maximize their rewards.
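The agent-environment loop described here can be sketched with minimal tabular Q-learning on a toy chain environment (reward only at the rightmost state). This is a generic illustration of the RL loop, unrelated to the A3C method the paper studies; the environment and hyperparameters are made up for the example.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a chain: actions 0/1 move left/right,
    reward 1.0 only on reaching the rightmost state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]    # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # one-step temporal-difference update
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy moves right in every non-terminal state, which is the reward-maximizing behavior for this chain.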
Understanding plasticity in neural networks
Plasticity, the ability of a neural network to quickly change its predictions in response to new information, is essential for the adaptability and robustness of deep reinforcement learning systems.
Read and Reap the Rewards: Learning to Play Atari with the Help of Instruction Manuals
Therefore, we hypothesize that the ability to utilize human-written instruction manuals when learning policies for specific tasks should lead to a more efficient, better-performing agent.
Enabling surrogate-assisted evolutionary reinforcement learning via policy embedding
The training process is accelerated by up to 7x on the tested games, compared to its counterpart without the surrogate and PE.
Generalized Munchausen Reinforcement Learning using Tsallis KL Divergence
Many policy optimization approaches in reinforcement learning incorporate a Kullback-Leibler (KL) divergence to the previous policy, to prevent the policy from changing too quickly.
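The KL-to-previous-policy regularizer mentioned here can be sketched for discrete action distributions. This is a generic toy objective, not the Munchausen or Tsallis-KL formulation from the paper; `beta` is an illustrative penalty weight.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete action distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def regularized_objective(advantages, new_pi, old_pi, beta=0.1):
    """Toy KL-regularized policy objective: expected advantage under the
    new policy, minus a penalty for drifting from the previous policy."""
    return float(np.dot(new_pi, advantages)) - beta * kl_divergence(new_pi, old_pi)
```

When the new policy equals the old one the penalty vanishes; the further the new policy drifts, the larger the subtracted KL term, which is what keeps updates conservative.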
Multi-compartment Neuron and Population Encoding improved Spiking Neural Network for Deep Distributional Reinforcement Learning
In this paper, we propose a brain-inspired SNN-based deep distributional reinforcement learning algorithm that combines a bio-inspired multi-compartment neuron (MCN) model with a population coding method.
Local-Guided Global: Paired Similarity Representation for Visual Reinforcement Learning
Recent vision-based reinforcement learning (RL) methods have found extracting high-level features from raw pixels with self-supervised learning to be effective in learning policies.