Random Ensemble Mixture

Introduced by Agarwal et al. in An Optimistic Perspective on Offline Reinforcement Learning

Random Ensemble Mixture (REM) is an easy-to-implement extension of DQN inspired by Dropout. The key intuition behind REM is that if one has access to multiple estimates of the Q-values, then any convex combination of those estimates is itself a valid Q-value estimate. Accordingly, at each training step, REM draws a random convex combination of multiple Q-value estimates and trains against this combined estimate, which makes the learned Q-function more robust.
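
To make the mechanics concrete, below is a minimal PyTorch sketch of a REM-style training loss. The names (MultiHeadQNetwork, rem_loss) and hyperparameters (number of heads, hidden size) are illustrative assumptions, not the authors' reference implementation. It maintains K Q-value heads, samples mixture weights alpha uniformly and normalizes them to sum to 1, and applies a standard DQN-style Bellman backup to the combined estimate Q(s, a) = sum_k alpha_k * Q_k(s, a).

```python
# Hypothetical REM sketch; not the paper's reference implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadQNetwork(nn.Module):
    """A shared torso with K independent Q-value heads."""
    def __init__(self, obs_dim, n_actions, n_heads=4, hidden=256):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, n_actions) for _ in range(n_heads)]
        )

    def forward(self, obs):
        h = self.torso(obs)
        # Output shape: (batch, n_heads, n_actions)
        return torch.stack([head(h) for head in self.heads], dim=1)

def rem_loss(q_net, target_net, batch, gamma=0.99):
    obs, actions, rewards, next_obs, dones = batch
    n_heads = len(q_net.heads)

    # Draw a random convex combination over heads: sample uniform
    # weights and normalize so they sum to 1 (one draw per step).
    alpha = torch.rand(n_heads)
    alpha = alpha / alpha.sum()

    # Combine the K Q-value estimates into a single estimate.
    q_values = (q_net(obs) * alpha.view(1, -1, 1)).sum(dim=1)
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)

    # Bellman target uses the same random combination of target heads.
    with torch.no_grad():
        next_q = (target_net(next_obs) * alpha.view(1, -1, 1)).sum(dim=1)
        target = rewards + gamma * (1.0 - dones) * next_q.max(dim=1).values

    # Huber loss, as in standard DQN training.
    return F.smooth_l1_loss(q_taken, target)
```

In the offline setting studied in the paper, batch would be sampled from a fixed dataset (e.g., the DQN Replay Dataset) rather than from a live replay buffer collected by the agent.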

Source: An Optimistic Perspective on Offline Reinforcement Learning

Tasks

Task                          Papers  Share
Reinforcement Learning (RL)        6  11.11%
EEG                                4   7.41%
DQN Replay Dataset                 3   5.56%
Offline RL                         3   5.56%
Computational Efficiency           2   3.70%
Management                         2   3.70%
Continual Learning                 2   3.70%
Super-Resolution                   2   3.70%
Atari Games                        2   3.70%

Components

Component  Type
DQN        Q-Learning Networks

Categories

Q-Learning Networks