Stochastic MPC with Dynamic Feedback Gain Selection and Discounted Probabilistic Constraints

14 Jul 2020  ·  Shuhao Yan, Paul J. Goulart, Mark Cannon ·

This paper considers linear discrete-time systems with additive disturbances, and designs a Model Predictive Control (MPC) law incorporating a dynamic feedback gain to minimise a quadratic cost function subject to a single chance constraint. The feedback gain is selected online, and we provide two selection methods based on minimising upper bounds on predicted costs. The chance constraint is defined as a discounted sum of violation probabilities over an infinite horizon. By penalising violation probabilities close to the initial time and assigning vanishingly small weights to violation probabilities in the far future, this form of constraint allows for an MPC law with guarantees of recursive feasibility without a boundedness assumption on the disturbance. A computationally convenient MPC optimisation problem is formulated using Chebyshev's inequality, and we introduce an online constraint-tightening technique to ensure recursive feasibility. The closed-loop system is guaranteed to satisfy the chance constraint and a quadratic stability condition. With dynamic feedback gain selection, the closed-loop cost is reduced and the conservativeness of Chebyshev's inequality is mitigated. Also, a larger feasible set of initial conditions can be obtained. Numerical simulations illustrate these results.
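To make the discounted chance constraint concrete, the following is a minimal sketch (not the paper's formulation) for a hypothetical scalar system x_{k+1} = a·x_k + w_k with zero-mean disturbance of variance sigma2. Each per-step violation probability P(x_k > h) is upper-bounded via the one-sided Chebyshev (Cantelli) inequality, and the bounds are summed with discount factor gamma; all parameter values below are illustrative assumptions.

```python
def discounted_violation_bound(a, sigma2, x0, h, gamma, N):
    """Chebyshev-based upper bound on sum_{k=1}^{N} gamma^{k-1} * P(x_k > h).

    Hypothetical scalar system x_{k+1} = a*x_k + w_k, with w_k zero-mean
    and variance sigma2 (no boundedness assumed, as in the paper's setting).
    The infinite-horizon discounted sum is truncated at N terms for this
    sketch; the tail vanishes as gamma^N for gamma < 1.
    """
    mu, v = x0, 0.0          # mean and variance of the predicted state
    total = 0.0
    for k in range(1, N + 1):
        mu = a * mu                      # mean propagation
        v = a * a * v + sigma2           # variance propagation
        assert mu < h, "Cantelli bound requires mu_k < h"
        p_k = v / (v + (h - mu) ** 2)    # one-sided Chebyshev bound on P(x_k > h)
        total += gamma ** (k - 1) * p_k
    return total

# Illustrative parameters (assumed, not from the paper):
bound = discounted_violation_bound(a=0.8, sigma2=0.01, x0=0.5, h=1.0,
                                   gamma=0.9, N=50)
print(bound)
```

In the paper's MPC optimisation, a bound of this kind is required to stay below a prescribed level, and the online constraint-tightening keeps that requirement recursively feasible; the sketch only shows how Chebyshev's inequality converts moment information into a computable probability bound.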
