Model-Free Incremental Adaptive Dynamic Programming Based Approximate Robust Optimal Regulation

4 May 2021  ·  Cong Li, Yongchao Wang, Fangzhou Liu, Qingchen Liu, Martin Buss

This paper presents a new formulation for model-free robust optimal regulation of continuous-time nonlinear systems. The proposed reinforcement learning based approach, referred to as incremental adaptive dynamic programming (IADP), exploits measured data to design an approximate optimal incremental control strategy, which stabilizes the controlled system incrementally under model uncertainties, environmental disturbances, and input saturation. By leveraging the time delay estimation (TDE) technique, we first exploit sensory data to reduce the requirement for complete knowledge of the system dynamics: measured data are used to construct an incremental dynamics model that reflects the system evolution in incremental form. The resulting incremental dynamics then serves to design the approximate optimal incremental control strategy via adaptive dynamic programming, implemented as a simplified single-critic structure that approximates the value function solving the Hamilton-Jacobi-Bellman equation. Furthermore, for the critic artificial neural network, experience data are used to design an off-policy weight update law with guaranteed weight convergence. Importantly, to address the TDE error that this construction unavoidably introduces, we incorporate a term related to the TDE error bound into the cost function, so that the TDE error is attenuated during the optimization process. Proofs of system stability and weight convergence are provided. Numerical simulations validate the effectiveness and superiority of the proposed IADP, particularly its reduced control energy expenditure and enhanced robustness.
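The core of the TDE step described above can be sketched in a few lines: for a system x_dot = f(x) + g·u with f unknown, the delayed measurement gives f(x(t−Δt)) ≈ x_dot(t−Δt) − g·u(t−Δt), and assuming f varies slowly over one sample, the current derivative follows in incremental form. The sketch below is a minimal scalar illustration under these assumptions; all names, the example system, and the constant input-gain estimate `g_bar` are hypothetical, not the paper's implementation.

```python
# Hypothetical TDE sketch: x_dot(t) ≈ x_dot(t - dt) + g_bar * (u(t) - u(t - dt)),
# which absorbs the unknown drift f(x) into the delayed derivative measurement.

def tde_predict_xdot(xdot_prev, u_prev, u_now, g_bar):
    """Predict the current state derivative from delayed measured data (TDE)."""
    return xdot_prev + g_bar * (u_now - u_prev)

# Demo on a scalar system x_dot = -x**3 + u, treating f(x) = -x**3 as unknown.
dt, g_bar = 1e-3, 1.0
x, u_prev = 0.5, 0.0
xdot_prev = -x**3 + g_bar * u_prev   # measured derivative at time t - dt
x = x + dt * xdot_prev               # Euler step to time t
u_now = 0.2                          # new control input applied at time t
true_xdot = -x**3 + g_bar * u_now    # ground truth (unknown to the controller)
pred_xdot = tde_predict_xdot(xdot_prev, u_prev, u_now, g_bar)
print(abs(true_xdot - pred_xdot))    # TDE error; shrinks as dt -> 0
```

The residual printed at the end is exactly the TDE error the paper attenuates through the extra cost-function term; it vanishes as the sampling interval shrinks, which is why the incremental formulation works at high sampling rates.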
