Safe Exploration in Reinforcement Learning: Training Backup Control Barrier Functions with Zero Training Time Safety Violations

13 Dec 2023 · Pedram Rabiee, Amirsaeid Safari

Safe reinforcement learning (RL) aims to satisfy safety constraints during training. However, guaranteeing safety throughout training remains a challenging problem. This paper presents RLBUS (Reinforcement Learning Backup Shield), a novel framework that integrates Backup Control Barrier Functions (BCBFs) with RL to enable safe exploration. BCBFs incorporate backup controllers that predict a system's finite-time response, enabling online optimization of a control policy that maintains forward invariance of a safe subset while satisfying actuator constraints. Building on the soft-minimum/soft-maximum CBF method from prior work, which ensures feasibility and continuity of the BCBF with multiple backup controllers, this paper integrates these BCBFs with RL. The framework leverages RL to learn a better backup policy that enlarges the forward invariant set, while guaranteeing safety during training. By combining backup controllers and RL, the approach provides safety and feasibility guarantees during training and enables safe online exploration with zero training-time safety violations. The method is demonstrated on an inverted pendulum example, where enlarging the forward invariant set through RL allows the pendulum to safely explore a larger region of the state space.
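As a rough sketch of the backup-CBF construction the abstract refers to (the notation below follows the general backup-CBF literature and is an assumption, not taken verbatim from this paper): let $h$ define the safe set $\mathcal{S} = \{x : h(x) \ge 0\}$, let $h_b$ define a backup set that the backup controller renders forward invariant, and let $\phi(t, x)$ denote the closed-loop flow under the backup controller. Sampling the horizon $[0, T]$ at times $0 \le t_1 < \dots < t_N = T$, the composed barrier takes a minimum along the predicted backup trajectory, smoothed by the log-sum-exp soft-minimum so it stays differentiable:

$$
\operatorname{softmin}_\rho(z_1, \dots, z_N) = -\frac{1}{\rho} \log \sum_{i=1}^{N} e^{-\rho z_i} \;\le\; \min_i z_i,
$$

$$
h^{*}(x) = \operatorname{softmin}_\rho\!\Big( h\big(\phi(t_1, x)\big), \dots, h\big(\phi(t_N, x)\big),\, h_b\big(\phi(T, x)\big) \Big).
$$

Because the soft-minimum lower-bounds the true minimum, $h^{*}(x) \ge 0$ conservatively certifies that the backup controller can keep the system safe over the horizon and land in the backup set, so the RL policy's actions can be filtered online to keep $h^{*}$ nonnegative. Learning a better backup policy then enlarges the region where $h^{*} \ge 0$, which is the forward invariant set the abstract describes.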
