Accountable Off-Policy Evaluation via Kernelized Bellman Statistics

ICML 2020  ·  Yihao Feng, Tongzheng Ren, Ziyang Tang, Qiang Liu

Off-policy evaluation plays an important role in modern reinforcement learning. However, most existing off-policy evaluation methods focus only on value estimation, without providing an accountable confidence interval that reflects the uncertainty caused by limited observed data and algorithmic errors. Recently, Feng et al. (2019) proposed a novel kernel loss for learning value functions, which can also be used to test whether a learned value function satisfies the Bellman equation. In this work, we investigate the statistical properties of this kernel loss, which allow us to find a feasible set that contains the true value function with high probability. We further utilize this set to construct an accountable confidence interval for off-policy value estimation, as well as a post-hoc diagnosis for existing estimators. Empirical results show that our methods yield tight yet accountable confidence intervals in different settings, demonstrating the effectiveness of our approach.
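To make the kernel loss concrete, below is a minimal Python sketch (not the authors' implementation) of an empirical kernel Bellman statistic in the spirit of Feng et al. (2019). It assumes an RBF kernel over states, a state-value function V evaluated on arrays of states, a fixed discount factor, and transitions gathered under the target policy; all function names and parameters here are illustrative.

    import numpy as np

    def rbf_kernel(X, Y, bandwidth=1.0):
        # Pairwise k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)) over rows of X and Y.
        sq_dists = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

    def kernel_bellman_statistic(v, states, rewards, next_states,
                                 gamma=0.99, bandwidth=1.0):
        # Bellman residuals R_V(s, r, s') = r + gamma * V(s') - V(s)
        # on the observed transitions; v(states) should return shape (n,).
        residuals = rewards + gamma * v(next_states) - v(states)   # (n,)
        K = rbf_kernel(states, states, bandwidth)                   # (n, n)
        n = residuals.shape[0]
        # U-statistic form: dropping the diagonal keeps the estimate unbiased
        # despite the noise in individual residuals.
        off_diag = K - np.diag(np.diag(K))
        return residuals @ off_diag @ residuals / (n * (n - 1))

In the paper's framework, a candidate value function is retained in the high-probability feasible set when such a statistic falls below a concentration-based threshold, and the accountable confidence interval is then obtained, roughly, by bounding the estimated policy value over that set.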

No code implementations yet.


Methods


No methods listed for this paper.