Finite Sample Analysis of LSTD with Random Projections and Eligibility Traces

25 May 2018 · Haifang Li, Yingce Xia, Wensheng Zhang

Policy evaluation with linear function approximation is an important problem in reinforcement learning. In high-dimensional feature spaces, this problem becomes extremely challenging in terms of both computational efficiency and approximation quality. We propose a new algorithm, LSTD($\lambda$)-RP, which leverages random projection techniques and incorporates eligibility traces to tackle these two challenges. We carry out a theoretical analysis of LSTD($\lambda$)-RP and provide meaningful upper bounds on the estimation error, the approximation error, and the total generalization error. These results demonstrate that LSTD($\lambda$)-RP benefits from both the random projection and eligibility trace strategies, and that it can achieve better performance than the prior LSTD-RP and LSTD($\lambda$) algorithms.
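To make the idea concrete, below is a minimal sketch, not the paper's exact procedure, of how LSTD($\lambda$) combines with a Johnson-Lindenstrauss-style random projection: high-dimensional features are mapped to a low-dimensional space by a random Gaussian matrix, and the standard eligibility-trace LSTD system is then solved in that space. All names (`lstd_lambda_rp`, `phis`, `proj`), the projection's scaling, and the small ridge term are illustrative assumptions following common conventions rather than details taken from the paper.

```python
import numpy as np

def lstd_lambda_rp(phis, rewards, gamma=0.95, lam=0.5, d=50, seed=0):
    """Sketch of LSTD(lambda) with random projections (assumed details).

    phis   : (T+1, D) array of high-dimensional features for one
             sampled trajectory s_0, ..., s_T.
    rewards: (T,) array of rewards r_t observed along the trajectory.
    Returns the weight vector theta in the projected d-dim space
    and the projection matrix used.
    """
    T, D = len(rewards), phis.shape[1]
    rng = np.random.default_rng(seed)

    # Random projection: Gaussian entries scaled by 1/sqrt(d), so inner
    # products are approximately preserved (Johnson-Lindenstrauss).
    proj = rng.normal(0.0, 1.0 / np.sqrt(d), size=(d, D))
    psis = phis @ proj.T  # projected features, shape (T+1, d)

    A = np.zeros((d, d))
    b = np.zeros(d)
    z = np.zeros(d)  # eligibility trace

    for t in range(T):
        z = gamma * lam * z + psis[t]                    # decay, then accumulate
        A += np.outer(z, psis[t] - gamma * psis[t + 1])  # TD system matrix
        b += rewards[t] * z

    # Solve A theta = b; a small ridge term guards against a singular A.
    theta = np.linalg.solve(A + 1e-6 * np.eye(d), b)
    return theta, proj  # value estimate of a state s: (proj @ phi_s) @ theta
```

The projected dimension `d` trades off the two error sources the paper bounds: a smaller `d` reduces the estimation error of solving the linear system from finite samples, while a larger `d` reduces the approximation error introduced by the projection.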
