Search Results for author: Le Wan

Found 5 papers, 2 papers with code

SEABO: A Simple Search-Based Method for Offline Imitation Learning

1 code implementation • 6 Feb 2024 • Jiafei Lyu, Xiaoteng Ma, Le Wan, Runze Liu, Xiu Li, Zongqing Lu

Offline reinforcement learning (RL) has attracted much attention due to its ability to learn from static offline datasets, eliminating the need to interact with the environment.

D4RL • Imitation Learning • +2
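The title names a search-based approach: nearest-neighbor search against expert demonstrations can be used to assign rewards to unlabeled offline transitions. Below is a minimal sketch of that idea, assuming (state, action) pairs are indexed with a KD-tree; the exponential squashing and its coefficient are illustrative choices, not necessarily the paper's exact ones.

```python
# Sketch of search-based reward labeling for offline imitation:
# unlabeled transitions get higher reward the closer they lie to expert data.
import numpy as np
from scipy.spatial import cKDTree

def label_rewards(expert_sa, dataset_sa, beta=1.0):
    """expert_sa, dataset_sa: arrays of concatenated (state, action) pairs."""
    tree = cKDTree(expert_sa)                # index expert transitions once
    dists, _ = tree.query(dataset_sa, k=1)   # nearest expert neighbor per sample
    return np.exp(-beta * dists)             # small distance -> reward near 1

# toy usage with fabricated data, just to show the shapes involved
rng = np.random.default_rng(0)
expert = rng.normal(size=(100, 6))
dataset = rng.normal(size=(1000, 6))
rewards = label_rewards(expert, dataset)
```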

Understanding What Affects Generalization Gap in Visual Reinforcement Learning: Theory and Empirical Evidence

no code implementations • 5 Feb 2024 • Jiafei Lyu, Le Wan, Xiu Li, Zongqing Lu

Recently, many efforts have attempted to learn useful policies for continuous control in visual reinforcement learning (RL).

Continuous Control • Learning Theory • +1

Off-Policy RL Algorithms Can be Sample-Efficient for Continuous Control via Sample Multiple Reuse

no code implementations • 29 May 2023 • Jiafei Lyu, Le Wan, Zongqing Lu, Xiu Li

Empirical results show that SMR significantly boosts the sample efficiency of the base methods across most of the evaluated tasks without any hyperparameter tuning or additional tricks.

Continuous Control • Q-Learning • +1
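The title's "sample multiple reuse" suggests reusing each batch drawn from the replay buffer for several consecutive gradient updates instead of one. A minimal sketch under that assumption, with hypothetical `agent.update` and `buffer.sample` interfaces:

```python
# Sketch of sample multiple reuse (SMR) inside an off-policy training step:
# sample a batch once, then perform M gradient updates on that same batch.
def train_step(agent, buffer, batch_size=256, reuse_m=5):
    batch = buffer.sample(batch_size)   # sample once ...
    for _ in range(reuse_m):            # ... update M times on the same batch
        agent.update(batch)
```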

Uncertainty-driven Trajectory Truncation for Data Augmentation in Offline Reinforcement Learning

1 code implementation • 10 Apr 2023 • Junjie Zhang, Jiafei Lyu, Xiaoteng Ma, Jiangpeng Yan, Jun Yang, Le Wan, Xiu Li

To empirically show the advantages of TATU, we first combine it with two classical model-based offline RL algorithms, MOPO and COMBO.

D4RL • Data Augmentation • +3
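The method's name points to truncating synthetic model rollouts once accumulated uncertainty grows too large, which fits its pairing with model-based algorithms like MOPO and COMBO. A minimal sketch, assuming uncertainty is measured as disagreement across a dynamics-model ensemble; the interface and threshold are illustrative:

```python
# Sketch of uncertainty-driven trajectory truncation: a synthetic rollout
# is cut off once cumulative ensemble disagreement exceeds a threshold.
import numpy as np

def truncated_rollout(ensemble, policy, s, horizon=5, threshold=1.0):
    """ensemble: list of models, each a callable (s, a) -> predicted next state."""
    traj, total_unc = [], 0.0
    for _ in range(horizon):
        a = policy(s)
        preds = np.stack([m(s, a) for m in ensemble])  # one prediction per model
        total_unc += preds.std(axis=0).max()           # disagreement as uncertainty
        if total_unc > threshold:                      # stop trusting the rollout
            break
        s_next = preds.mean(axis=0)
        traj.append((s, a, s_next))
        s = s_next
    return traj
```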

State Advantage Weighting for Offline RL

no code implementations • 9 Oct 2022 • Jiafei Lyu, Aicheng Gong, Le Wan, Zongqing Lu, Xiu Li

We present state advantage weighting for offline reinforcement learning (RL).

D4RL • Offline RL • +2
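One plausible reading of state advantage weighting is an advantage-weighted behavior-cloning loss in which the advantage is computed over states rather than actions. The sketch below assumes a learned state-value function and a hypothetical `policy.log_prob` interface; it is an illustration of the weighting idea, not the paper's exact objective.

```python
# Sketch of a policy loss weighted by a *state* advantage: transitions whose
# next state looks better than expected get more weight in behavior cloning.
# The advantage estimate, temperature, and clipping value are illustrative.
import torch

def saw_policy_loss(policy, value_fn, s, a, s_next, temperature=3.0):
    with torch.no_grad():
        adv = value_fn(s_next) - value_fn(s)        # state advantage estimate
        w = torch.clamp(torch.exp(temperature * adv), max=100.0)
    log_prob = policy.log_prob(s, a)                # hypothetical interface
    return -(w * log_prob).mean()                   # weighted behavior cloning
```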
