Optimality Inductive Biases and Agnostic Guidelines for Offline Reinforcement Learning

3 Jul 2021 · Lionel Blondé, Alexandros Kalousis, Stéphane Marchand-Maillet

The performance of state-of-the-art offline RL methods varies widely over the spectrum of dataset qualities, ranging from far-from-optimal random data to close-to-optimal expert demonstrations. We re-implement these methods to test their reproducibility, and show that when a given method outperforms the others on one end of the spectrum, it never does on the other end. This prevents us from naming a victor across the board. We attribute the asymmetry to the amount of inductive bias injected into the agent to entice it to posit that the behavior underlying the offline dataset is optimal for the task. Our investigations confirm that careless injections of such optimality inductive biases make dominant agents subpar as soon as the offline policy is sub-optimal. To bridge this gap, we generalize importance-weighted regression methods that have proved the most versatile across the spectrum of dataset grades into a modular framework that allows for the design of methods that align with how much we know about the dataset. This modularity enables qualitatively different injections of optimality inductive biases. We show that certain orchestrations strike the right balance, improving the return on one end of the spectrum without harming it on the other end. While the formulation of guidelines for the design of an offline method reduces to aligning the amount of optimality bias to inject with what we know about the quality of the data, the design of an agnostic method for which we need not know the quality of the data beforehand is more nuanced. Only our framework allowed us to design a method that performed well across the spectrum while remaining modular if more information about the quality of the data ever becomes available.
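
The core technique the abstract refers to is the family of importance-weighted (advantage-weighted) regression methods that the proposed framework generalizes. Below is a minimal, illustrative PyTorch sketch of such a weighted behavioral-cloning update. The class and function names, the exp(advantage / beta) weighting, and all hyperparameters are assumptions made for illustration; this is not the paper's modular framework, only the general idea it builds on. The temperature beta is the knob that controls how much optimality bias is injected: large beta approaches plain behavioral cloning (full trust in the dataset policy), while small beta sharply favors high-advantage actions.

```python
# Illustrative sketch of an advantage-weighted regression policy update.
# All names and values here are assumptions for exposition, not the paper's API.

import torch
import torch.nn as nn


class GaussianPolicy(nn.Module):
    """Diagonal-Gaussian policy: maps states to action means and log-stds."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def log_prob(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        h = self.net(obs)
        dist = torch.distributions.Normal(self.mu(h), self.log_std.exp())
        return dist.log_prob(act).sum(dim=-1)


def weighted_regression_loss(policy: GaussianPolicy,
                             obs: torch.Tensor,
                             act: torch.Tensor,
                             advantage: torch.Tensor,
                             beta: float = 1.0,
                             max_weight: float = 20.0) -> torch.Tensor:
    """Weighted behavioral cloning: negative log-likelihood of dataset actions,
    re-weighted by exp(advantage / beta). Beta -> infinity gives uniform weights
    (pure cloning, i.e. full trust that the data is optimal); small beta injects
    a stronger optimality inductive bias by up-weighting high-advantage actions."""
    weights = torch.clamp(torch.exp(advantage / beta), max=max_weight)
    return -(weights.detach() * policy.log_prob(obs, act)).mean()


if __name__ == "__main__":
    # Toy batch standing in for a fictitious offline dataset; the advantages
    # would normally come from a separately trained critic.
    obs = torch.randn(64, 17)
    act = torch.randn(64, 6)
    adv = torch.randn(64)

    policy = GaussianPolicy(obs_dim=17, act_dim=6)
    optim = torch.optim.Adam(policy.parameters(), lr=3e-4)

    loss = weighted_regression_loss(policy, obs, act, adv, beta=1.0)
    optim.zero_grad()
    loss.backward()
    optim.step()
```

In this simplified reading, tuning (or scheduling) beta is one concrete way to align the amount of optimality bias with what is known about the dataset: near-expert data tolerates large weights on the likelihood term, while random or mixed-quality data calls for the bias to be dialed down.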
