DSAC: Distributional Soft Actor Critic for Risk-Sensitive Reinforcement Learning

30 Apr 2020 · Xiaoteng Ma, Li Xia, Zhengyuan Zhou, Jun Yang, Qianchuan Zhao

In this paper, we present Distributional Soft Actor Critic (DSAC), a new reinforcement learning (RL) algorithm that exploits the distributional information of accumulated rewards to achieve better performance. By seamlessly integrating SAC (which uses entropy to encourage exploration) with a principled distributional view of the underlying objective, DSAC accounts for randomness in both actions and rewards, and outperforms state-of-the-art baselines on several continuous control benchmarks. Moreover, using the distributional information of returns, we propose a unified framework for risk-sensitive learning that goes beyond maximizing only the expected accumulated reward. Within this framework we discuss three specific risk-related metrics: percentile, mean-variance, and distorted expectation. Our extensive experiments demonstrate that, with distribution modeling in RL, the agent performs better on both risk-averse and risk-seeking control tasks.
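To make the risk-metric idea concrete: given the quantile function $F_Z^{-1}$ of a return distribution $Z$ and a non-decreasing distortion function $g: [0,1] \to [0,1]$, the distorted expectation is $\int_0^1 F_Z^{-1}(\tau)\, dg(\tau)$; CVaR at level $\alpha$, for instance, arises from $g(\tau) = \min(\tau/\alpha, 1)$. Below is a minimal sketch, not code from the paper, of how such a metric could be computed from a quantile-based return estimate; the function names and the toy distribution are our own assumptions for illustration.

```python
import numpy as np

def distorted_expectation(quantiles, distortion):
    """Approximate the distorted expectation of a return distribution
    represented by N equally spaced quantile estimates.

    quantiles:  array of shape (N,), estimated quantile values
                theta_i ~ F_Z^{-1}((i + 0.5) / N), sorted ascending.
    distortion: a non-decreasing map g: [0, 1] -> [0, 1] with
                g(0) = 0 and g(1) = 1.
    """
    n = len(quantiles)
    taus = np.linspace(0.0, 1.0, n + 1)                     # bin edges tau_0 .. tau_N
    weights = distortion(taus[1:]) - distortion(taus[:-1])  # mass dg over each bin
    return float(np.dot(weights, quantiles))

def cvar_distortion(alpha):
    """CVaR_alpha as a distortion: g(tau) = min(tau / alpha, 1).

    Risk-averse for alpha < 1, since only the worst alpha-fraction
    of outcomes receives weight.
    """
    return lambda tau: np.minimum(tau / alpha, 1.0)

# Toy example (hypothetical data): sorted samples as a crude quantile estimate.
quantiles = np.sort(np.random.default_rng(0).normal(loc=1.0, scale=2.0, size=32))
mean_est = distorted_expectation(quantiles, lambda tau: tau)  # identity g -> plain mean
cvar_est = distorted_expectation(quantiles, cvar_distortion(0.25))
print(f"mean ~ {mean_est:.3f}, CVaR_0.25 ~ {cvar_est:.3f}")  # CVaR <= mean
```

With the identity distortion this recovers the risk-neutral expected return, so a single learned return distribution supports risk-averse, risk-neutral, and risk-seeking objectives simply by swapping the distortion function.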
