Off-Policy TD Control

Clipped Double Q-learning

Introduced by Fujimoto et al. in Addressing Function Approximation Error in Actor-Critic Methods

Clipped Double Q-learning is a variant of Double Q-learning that upper-bounds the less biased Q estimate $Q_{\theta_{2}}$ by the biased estimate $Q_{\theta_{1}}$. This is equivalent to taking the minimum of the two estimates, resulting in the following target update:

$$ y_{1} = r + \gamma\min_{i=1,2}Q_{\theta'_{i}}\left(s', \pi_{\phi_{1}}\left(s'\right)\right) $$

The motivation for this extension is that vanilla Double Q-learning can be ineffective when the target and current networks are too similar, e.g. with a slowly changing policy in an actor-critic framework.
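
As a concrete illustration, below is a minimal PyTorch sketch of this target computation (as used in TD3). The module names `actor_target`, `q1_target`, and `q2_target`, and the `done` mask, are illustrative assumptions rather than part of the original description.

```python
import torch

def clipped_double_q_target(reward, next_state, done, gamma,
                            actor_target, q1_target, q2_target):
    """Compute y = r + gamma * min_i Q_{theta'_i}(s', pi_{phi}(s')) for a batch.

    Sketch only: actor_target, q1_target, q2_target are assumed to be
    user-defined target networks (nn.Module instances).
    """
    with torch.no_grad():
        next_action = actor_target(next_state)        # pi_phi(s')
        q1 = q1_target(next_state, next_action)       # Q_{theta'_1}(s', a')
        q2 = q2_target(next_state, next_action)       # Q_{theta'_2}(s', a')
        min_q = torch.min(q1, q2)                     # clipped (element-wise min) estimate
        # Zero out the bootstrap term at terminal states via the done mask.
        return reward + gamma * (1.0 - done) * min_q
```

Taking the element-wise minimum means the target never exceeds the more pessimistic of the two critics, which counteracts the overestimation bias that a single bootstrapped critic tends to accumulate.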

Source: Addressing Function Approximation Error in Actor-Critic Methods

Tasks


Task Papers Share
Reinforcement Learning (RL) 61 41.22%
Continuous Control 27 18.24%
OpenAI Gym 9 6.08%
Decision Making 7 4.73%
Autonomous Driving 5 3.38%
Offline RL 3 2.03%
Meta-Learning 3 2.03%
Benchmarking 3 2.03%
D4RL 2 1.35%

