Data Informed Residual Reinforcement Learning for High-Dimensional Robotic Tracking Control

28 Oct 2021 · Cong Li, Fangzhou Liu, Yongchao Wang, Martin Buss

The learning inefficiency of reinforcement learning (RL) from scratch hinders its practical application to continuous robotic tracking control, especially for high-dimensional robots. This work proposes a data-informed residual reinforcement learning (DR-RL) based tracking control scheme applicable to robots with high dimensionality. The proposed DR-RL methodology outperforms its from-scratch RL counterpart in sample efficiency and scalability. Specifically, we first decouple the original robot into low-dimensional robotic subsystems, and then use one-step backward (OSB) data to construct incremental subsystems that serve as equivalent, model-free representations of the decoupled robotic subsystems. The formulated incremental subsystems allow for parallel learning, which relieves the computational load, and provide mathematical descriptions of the robotic movements for theoretical analysis. We then apply DR-RL under a parallel learning architecture to learn the tracking control policy, a combination of an incremental base policy and an incremental residual policy. The incremental residual policy takes the guidance of the incremental base policy as its learning initialization and further learns from interactions with the environment, endowing the tracking control policy with adaptability to dynamically changing environments. The proposed DR-RL based tracking control scheme is developed with rigorous theoretical analysis of system stability and weight convergence, and is validated both numerically in comparative simulations and experimentally on a 3-DoF robot manipulator, on a task where the counterpart RL methods fail.
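As a rough sketch of how one-step backward (OSB) data can yield a model-free incremental representation, the standard first-order incremental-model construction linearizes the dynamics around the previous sampling instant (an illustrative assumption; the symbols below are not necessarily the paper's notation):

    x_{k+1} \approx x_k + F_{k-1}\,(x_k - x_{k-1}) + G_{k-1}\,(u_k - u_{k-1})

where F_{k-1} and G_{k-1} are the state and input sensitivities at the previous step, identifiable online from measured state-input data alone, so no analytic robot model is required.

Below is a minimal Python sketch of the base-plus-residual policy structure, assuming a PD-like base law and a linear-in-features residual term trained by a generic gradient-style update; all names, gains, and the update rule are hypothetical illustrations, not the paper's implementation:

    import numpy as np

    def base_policy(error, error_rate, kp=20.0, kd=2.0):
        # Hypothetical incremental base policy: a simple PD-like tracking law.
        return kp * error + kd * error_rate

    class ResidualPolicy:
        # Hypothetical linear-in-features residual term, initialized at zero
        # so that learning starts from the base policy's behavior.
        def __init__(self, n_features, lr=1e-3):
            self.w = np.zeros(n_features)
            self.lr = lr

        def __call__(self, features):
            return float(self.w @ features)

        def update(self, features, learning_signal):
            # Generic gradient-style weight update driven by a scalar
            # learning signal (e.g., a temporal-difference error).
            self.w += self.lr * learning_signal * features

    def tracking_command(error, error_rate, features, residual):
        # Applied command = base-policy output + learned residual correction.
        return base_policy(error, error_rate) + residual(features)

    # Example: one control step for a single decoupled subsystem.
    residual = ResidualPolicy(n_features=3)
    features = np.array([0.10, -0.02, 1.0])
    u = tracking_command(error=0.10, error_rate=-0.02, features=features, residual=residual)

Because the residual weights start at zero, the initial closed-loop behavior matches the base policy; the residual then adapts online, which is the mechanism the abstract credits for adaptability to changing environments.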
