Reducing Estimation Bias via Weighted Delayed Deep Deterministic Policy Gradient

18 Jun 2020 · Qiang He, Xinwen Hou

The overestimation phenomenon caused by function approximation is a well-known issue in value-based reinforcement learning algorithms such as deep Q-networks and DDPG, and it can lead to suboptimal policies. To address this issue, TD3 takes the minimum value between a pair of critics, which in turn introduces an underestimation bias. By unifying these two opposite biases, we propose a novel Weighted Delayed Deep Deterministic Policy Gradient (WD3) algorithm, which reduces the estimation error and further improves performance by weighting a pair of critics. We compare the learning process of the value function across DDPG, TD3, and our proposed algorithm, verifying that our algorithm can indeed eliminate the estimation error of the value function. We evaluate our algorithm on the OpenAI Gym continuous control tasks, where it outperforms the state-of-the-art algorithms on every environment tested.
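The idea of weighting a pair of critics can be sketched in a few lines. Below is a minimal PyTorch-style sketch of a weighted TD target, assuming (as a hypothetical parameterization, not necessarily the paper's exact formulation) that the target blends the minimum and the mean of the two target critics with a weight `beta`, on top of TD3's target policy smoothing; names such as `beta`, `actor_target`, `critic1_target`, and `critic2_target` are illustrative.

```python
import torch

def weighted_critic_target(reward, not_done, next_state, actor_target,
                           critic1_target, critic2_target,
                           gamma=0.99, beta=0.75,
                           noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """Sketch of a weighted TD target for a pair of critics.

    Hypothetical parameterization: blends TD3's pessimistic min(Q1, Q2)
    with the mean of the two critics, weighted by `beta`, so that the
    underestimation of the min and the overestimation of a single critic
    can offset each other.
    """
    with torch.no_grad():
        # TD3-style target policy smoothing: clipped noise on the target action.
        next_action = actor_target(next_state)
        noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (next_action + noise).clamp(-max_action, max_action)

        # Evaluate both target critics at the smoothed next action.
        q1 = critic1_target(next_state, next_action)
        q2 = critic2_target(next_state, next_action)

        # Weighted combination of the pessimistic min and the average.
        q_min = torch.min(q1, q2)
        q_mean = 0.5 * (q1 + q2)
        q_weighted = beta * q_min + (1.0 - beta) * q_mean

        # Standard bootstrapped TD target (not_done masks terminal states).
        return reward + not_done * gamma * q_weighted
```

Under this sketch, `beta = 1` recovers TD3's clipped double-Q target (underestimation), while `beta = 0` uses the plain average of the two critics (closer to DDPG's overestimation); intermediate values interpolate between the two opposite biases.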
