Safe Distributional Reinforcement Learning

Safety in reinforcement learning (RL) is a key property in both training and execution in many domains, such as autonomous driving or finance. In this paper, we formalize it with a constrained RL formulation in the distributional RL setting. Our general model accepts various definitions of safety (e.g., bounds on expected performance, CVaR, variance, or probability of reaching bad states). To ensure safety during learning, we extend a safe policy optimization method to solve our problem. The distributional RL perspective leads to a more efficient algorithm while additionally catering for natural safety constraints. We empirically validate our propositions on artificial and real domains against appropriate state-of-the-art safe RL algorithms.
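
To illustrate the kind of constraint the abstract mentions, the following is a minimal sketch (not the paper's algorithm) of evaluating a CVaR safety constraint from a quantile representation of the return distribution, as used in quantile-based distributional RL. The function names and the threshold value are hypothetical, chosen only for illustration.

```python
import numpy as np

def cvar_from_quantiles(quantiles: np.ndarray, alpha: float) -> float:
    """CVaR at level alpha from equally weighted quantile samples of a
    return distribution: the mean of the worst alpha-fraction of returns."""
    sorted_q = np.sort(quantiles)
    k = max(1, int(np.ceil(alpha * len(sorted_q))))
    return float(sorted_q[:k].mean())

def is_safe(quantiles: np.ndarray, alpha: float, threshold: float) -> bool:
    """The return distribution Z satisfies the CVaR safety constraint
    when CVaR_alpha(Z) >= threshold (hypothetical constraint form)."""
    return cvar_from_quantiles(quantiles, alpha) >= threshold

# Example: 32 quantile estimates of the return Z(s, a) under some policy.
rng = np.random.default_rng(0)
z_quantiles = rng.normal(loc=10.0, scale=3.0, size=32)
print(is_safe(z_quantiles, alpha=0.1, threshold=2.0))
```

Because a quantile-based distributional critic already carries the full return distribution, such risk measures can be read off directly rather than estimated separately, which is one reason the distributional perspective accommodates these constraints naturally.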
