Replay Memory

Experience Replay

Experience Replay is a replay-memory technique used in reinforcement learning. The agent's experience at each time-step, $e_{t} = \left(s_{t}, a_{t}, r_{t}, s_{t+1}\right)$, is stored in a dataset $D = \{e_{1}, \cdots, e_{N}\}$ pooled over many episodes, called the replay memory. During training, a minibatch of experience is usually sampled uniformly at random from the memory and used to learn off-policy, as in Deep Q-Networks. Because consecutive transitions are strongly correlated, updating on them in order makes training unstable; sampling randomly from the replay memory breaks this autocorrelation and makes the updates behave more like supervised learning on approximately i.i.d. data.

Image Credit: Hands-On Reinforcement Learning with Python, Sudharsan Ravichandiran
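Below is a minimal sketch of such a replay memory in Python; the class name, method names, and capacity value are illustrative choices, not the API of any particular library:

```python
import random
from collections import deque


class ReplayMemory:
    """Fixed-capacity buffer of transitions e_t = (s_t, a_t, r_t, s_{t+1})."""

    def __init__(self, capacity):
        # Once the deque is full, appending discards the oldest experience,
        # so the memory always holds the N most recent transitions.
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state):
        # Store one experience tuple from the current time-step.
        self.memory.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation between
        # consecutive transitions, which is what stabilizes training.
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)


# Hypothetical usage inside a training loop:
# memory = ReplayMemory(capacity=100_000)
# memory.push(s, a, r, s_next)    # after each environment step
# batch = memory.sample(32)       # minibatch for the off-policy update
```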

Tasks


| Task | Papers | Share |
| --- | --- | --- |
| Reinforcement Learning (RL) | 253 | 32.15% |
| Continual Learning | 57 | 7.24% |
| Continuous Control | 47 | 5.97% |
| Decision Making | 32 | 4.07% |
| Multi-agent Reinforcement Learning | 23 | 2.92% |
| OpenAI Gym | 23 | 2.92% |
| Atari Games | 17 | 2.16% |
| Management | 16 | 2.03% |
| Incremental Learning | 15 | 1.91% |
