Proximal Policy Optimization Algorithms

20 Jul 2017 · John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov

We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.
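The "surrogate" objective referred to above is, in its main variant, the clipped objective L^CLIP(θ) = E_t[min(r_t(θ) Â_t, clip(r_t(θ), 1−ε, 1+ε) Â_t)], where r_t(θ) = π_θ(a_t|s_t) / π_θ_old(a_t|s_t) is the probability ratio between the current and data-collecting policies and Â_t is an advantage estimate. Below is a minimal NumPy sketch of this loss; the function and argument names and the ε = 0.2 default are illustrative choices, not taken from the authors' reference implementation.

```python
import numpy as np

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate objective, negated so it can be minimized.

    log_probs_new : log pi_theta(a_t | s_t) under the policy being updated
    log_probs_old : log pi_theta_old(a_t | s_t) under the policy that collected the data
    advantages    : advantage estimates A_hat_t
    clip_eps      : clipping parameter epsilon (0.2 is a common choice)
    """
    ratio = np.exp(log_probs_new - log_probs_old)                          # r_t(theta)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic (elementwise-minimum) bound, averaged over the minibatch;
    # negated so gradient descent performs gradient ascent on the objective.
    return -np.mean(np.minimum(unclipped, clipped))

# Example: one minibatch of 3 timesteps (hypothetical numbers).
new_lp = np.log(np.array([0.30, 0.55, 0.20]))
old_lp = np.log(np.array([0.25, 0.60, 0.15]))
adv = np.array([1.0, -0.5, 2.0])
print(ppo_clip_loss(new_lp, old_lp, adv))
```

Because clipping removes the incentive to push r_t(θ) outside [1−ε, 1+ε], the same batch of collected data can be reused for several epochs of minibatch updates, which is where the sample-efficiency gain over standard policy gradient methods comes from.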

Task                Dataset                    Model  Metric  Metric Value  Global Rank
Continuous Control  Lunar Lander (OpenAI Gym)  PPO    Score   175.14±44.94  #4

Results from Other Papers


Task                        Dataset                              Model                        Metric         Metric Value  Rank
Neural Architecture Search  NATS-Bench Topology, CIFAR-10        PPO (Schulman et al., 2017)  Test Accuracy  94.02         #3
Neural Architecture Search  NATS-Bench Topology, CIFAR-100       PPO (Schulman et al., 2017)  Test Accuracy  71.68         #2
Neural Architecture Search  NATS-Bench Topology, ImageNet16-120  PPO (Schulman et al., 2017)  Test Accuracy  44.95         #3
