Double Q-learning: New Analysis and Sharper Finite-time Bound

1 Jan 2021 Anonymous

Double Q-learning (Hasselt 2010) has gained significant success in practice due to its effectiveness in overcoming the overestimation issue of Q-learning. However, the theoretical understanding of double Q-learning is rather limited, and the only existing finite-time analysis was recently established in (Xiong et al. 2020) under a polynomial learning rate...
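As background for the abstract, the overestimation fix in double Q-learning comes from maintaining two Q estimators and decoupling action selection from action evaluation. A minimal tabular sketch under assumed names (dict-based Q tables `qa`/`qb`, a toy action set) is:

```python
import random
from collections import defaultdict

def double_q_update(qa, qb, s, a, r, s_next,
                    alpha=0.1, gamma=0.99, actions=(0, 1)):
    """One tabular double Q-learning step (sketch in the spirit of Hasselt 2010)."""
    # With probability 1/2, swap roles so each table is updated equally often.
    if random.random() < 0.5:
        qa, qb = qb, qa
    # Select the greedy action with one table...
    best = max(actions, key=lambda x: qa[(s_next, x)])
    # ...but evaluate it with the other, which reduces overestimation bias.
    target = r + gamma * qb[(s_next, best)]
    qa[(s, a)] += alpha * (target - qa[(s, a)])

qa = defaultdict(float)
qb = defaultdict(float)
double_q_update(qa, qb, s=0, a=1, r=1.0, s_next=0)
```

Exactly one of the two tables receives the update on each step; the choice of which is random, mirroring the coin-flip update rule analyzed in the finite-time bounds the abstract discusses.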





Methods used in the Paper

Off-Policy TD Control
Double Q-learning