
Reinforcement learning with world model

Model-free reinforcement learning algorithms have achieved remarkable performance on many decision-making and control tasks, but high sample complexity and low sample efficiency still hinder their wide use. In this paper, we argue that if we intend to design an intelligent agent that learns fast and transfers well, the agent must reflect key elements of intelligence: intuition, memory, prediction, and curiosity. We propose an agent framework that integrates off-policy reinforcement learning with world model learning, so as to embody these features of intelligence in our algorithm design. We adopt the state-of-the-art model-free reinforcement learning algorithm Soft Actor-Critic as the agent's intuition, and learn a world model through an RNN to endow the agent with memory, curiosity, and the ability to predict. We show that these ideas work collaboratively, and that our agent (RMC) achieves new state-of-the-art results while maintaining sample efficiency and training stability. Moreover, our agent framework can be easily extended from MDP to POMDP problems without performance loss.
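The abstract gives no implementation details, but a minimal sketch of the RNN world-model component it describes might look as follows, assuming a PyTorch implementation in which a GRU's hidden state serves as the agent's memory and its next-observation prediction error doubles as an intrinsic curiosity bonus. All class names, shapes, and the curiosity_bonus helper are hypothetical, not the authors' code.

import torch
import torch.nn as nn

class RNNWorldModel(nn.Module):
    """Assumed world model: a GRU over (observation, action) pairs whose
    hidden state acts as memory and whose head predicts the next observation."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.predictor = nn.Linear(hidden_dim, obs_dim)  # next-observation head

    def forward(self, obs, act, hidden=None):
        # obs: (batch, seq, obs_dim), act: (batch, seq, act_dim)
        out, hidden = self.rnn(torch.cat([obs, act], dim=-1), hidden)
        return self.predictor(out), hidden

def curiosity_bonus(model, obs, act, next_obs, scale=0.1):
    """Intrinsic reward as scaled next-observation prediction error
    (one common way to realize curiosity; assumed here, not specified
    by the abstract)."""
    with torch.no_grad():
        pred, _ = model(obs, act)
    return scale * ((pred - next_obs) ** 2).mean(dim=-1)

In training, the same prediction error would presumably be minimized as the world-model loss, while the off-policy learner (e.g., a standard Soft Actor-Critic implementation) consumes transitions whose rewards include this curiosity bonus.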
