Learning Soft Constrained MPC Value Functions: Efficient MPC Design and Implementation providing Stability and Safety Guarantees

Model Predictive Control (MPC) can be applied to safety-critical control problems, providing closed-loop safety and performance guarantees. Implementing an MPC controller, however, requires solving an optimization problem at every sampling instant, which is challenging to execute on embedded hardware. To address this challenge, we propose a framework that combines a tightened soft constrained MPC formulation with supervised learning to approximate the MPC value function. This combination enables us to obtain a corresponding optimal control law, which can be implemented efficiently on embedded platforms. The framework ensures stability and constraint satisfaction for various nonlinear systems. While the design effort is similar to that of nominal MPC, the proposed formulation provides input-to-state stability (ISS) with respect to the approximation error of the value function. Furthermore, we prove that the value function corresponding to the soft constrained MPC problem is Lipschitz continuous for Lipschitz continuous systems, even when the optimal control law may be discontinuous. This serves two purposes: first, it allows us to relate approximation errors to a sufficiently large constraint tightening, yielding constraint satisfaction guarantees; second, it paves the way for an efficient supervised learning procedure that produces a continuous value function approximation. We demonstrate the effectiveness of the method on a nonlinear numerical example.
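Below is a minimal, self-contained Python sketch of the two-stage pipeline the abstract describes: offline, the soft constrained MPC value function is sampled on a set of states and fit with a continuous regressor; online, the control input is recovered by a one-step minimization of the stage cost plus the learned value at the successor state. All names and numbers here (`f`, `stage_cost`, `soft_mpc_value`, the input bounds) are hypothetical placeholders, and the greedy rollout used to generate labels merely stands in for the paper's actual soft constrained MPC solver.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

# Hypothetical Lipschitz continuous dynamics x+ = f(x, u): a discretized
# double integrator with a mild nonlinearity (placeholder for the paper's system).
def f(x, u):
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (u[0] - 0.5 * np.sin(x[0]))])

def stage_cost(x, u):
    return x @ x + 0.1 * u @ u  # quadratic stage cost l(x, u)

def soft_mpc_value(x0, N=20):
    # Placeholder for the offline soft constrained MPC solve: a crude greedy
    # rollout used here only to generate illustrative value-function labels.
    x, val = np.array(x0, dtype=float), 0.0
    for _ in range(N):
        u = minimize(lambda u: stage_cost(x, u) + 10.0 * f(x, u) @ f(x, u),
                     np.zeros(1)).x
        val += stage_cost(x, u)
        x = f(x, u)
    return val

# --- Offline: sample states, label them with V(x), fit a continuous surrogate ---
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 2))
y = np.array([soft_mpc_value(x) for x in X])
V_hat = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X, y)

# --- Online: control law induced by the learned value function ---
def control(x):
    # u* = argmin_u  l(x, u) + V_hat(f(x, u)) over the (assumed) input box.
    res = minimize(lambda u: stage_cost(x, u)
                   + V_hat.predict(f(x, u).reshape(1, -1))[0],
                   np.zeros(1), bounds=[(-1.0, 1.0)])
    return res.x

x = np.array([1.5, -0.5])
for _ in range(5):
    x = f(x, control(x))
print("state after 5 steps:", x)
```

Because the soft constrained value function is Lipschitz continuous even where the optimal control law is not, a smooth surrogate like the one above can in principle approximate it uniformly, and the resulting approximation error can be absorbed into the constraint tightening as outlined in the abstract.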
