On the Guarantees of Minimizing Regret in Receding Horizon

26 Jun 2023 · Andrea Martin, Luca Furieri, Florian Dörfler, John Lygeros, Giancarlo Ferrari-Trecate

Towards bridging classical optimal control and online learning, regret minimization has recently been proposed as a control design criterion. This competitive paradigm penalizes the loss relative to the optimal control actions chosen by a clairvoyant policy, and thus allows tracking the optimal performance in hindsight regardless of how the disturbances are generated. In this paper, we propose the first receding horizon scheme based on the repeated computation of finite horizon regret-optimal policies, and we establish stability and safety guarantees for the resulting closed-loop system. Our derivations combine novel monotonicity properties of clairvoyant policies with suitable terminal ingredients. We prove that our scheme is recursively feasible and stabilizing, and that it achieves bounded regret relative to the infinite horizon clairvoyant policy. Finally, we show that the underlying policy optimization problem can be solved efficiently through convex-concave programming. Our numerical experiments show that minimizing regret can outperform standard receding horizon approaches when the disturbances poorly fit classical design assumptions, even when the finite horizon plan is recomputed less frequently.
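To make the setting concrete: for a disturbed linear system x_{t+1} = A x_t + B u_t + w_t, the regret of a causal policy over a horizon is the gap between its accumulated cost and the cost of the clairvoyant policy that knows the entire disturbance sequence in advance. The sketch below illustrates only the receding-horizon skeleton that the paper builds on, i.e. repeatedly synthesizing a finite-horizon policy and applying its first input. It is not the paper's method: the regret-optimal synthesis via convex-concave programming is abstracted away, with a standard finite-horizon LQR Riccati recursion standing in for it, and the system matrices, costs, horizon, and disturbance model are all illustrative assumptions.

```python
import numpy as np

# Illustrative double integrator (assumed, not from the paper):
# x_{t+1} = A x_t + B u_t + w_t
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)   # state stage cost (assumed)
R = np.eye(1)   # input stage cost (assumed)
T = 10          # finite planning horizon

def finite_horizon_gains(A, B, Q, R, T):
    """Backward Riccati recursion for a finite-horizon LQR policy.

    Stand-in for the finite-horizon regret-optimal synthesis of the
    paper, which is instead obtained via convex-concave programming.
    """
    P = Q.copy()
    gains = []
    for _ in range(T):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1]  # gains[k] is the gain at step k of the horizon

# Receding-horizon loop: replan over the next T steps, apply only the
# first input, then shift the horizon. (For this time-invariant stand-in
# the replanning is redundant; it is kept to show the loop structure.)
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
for t in range(50):
    K0 = finite_horizon_gains(A, B, Q, R, T)[0]  # first-step gain
    u = -K0 @ x
    w = 0.05 * rng.standard_normal(2)            # assumed disturbance model
    x = A @ x + B @ u + w
```

The closed-loop guarantees discussed in the abstract (recursive feasibility, stability, bounded regret) concern exactly this kind of loop when the inner synthesis is the regret-optimal one with suitable terminal ingredients, rather than the LQR placeholder used here.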
