Adaptive Discretization for Continuous Control using Particle Filtering Policy Network

Pei Xu and Ioannis Karamouzas, 16 Mar 2020

Controlling the movements of highly articulated agents and robots has been a long-standing challenge for model-free deep reinforcement learning. In this paper, we propose a simple yet general framework for improving the performance of policy gradient algorithms by discretizing the continuous action space...
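The abstract builds on the idea of discretizing a continuous action space so that discrete-action policy gradient methods can be applied. Below is a minimal sketch of the plain, uniform per-dimension discretization that such approaches start from; it is not the paper's adaptive, particle-based scheme, and all function names are illustrative.

```python
import numpy as np

def make_action_bins(low, high, bins_per_dim):
    """Uniformly discretize each continuous action dimension into fixed bins."""
    return [np.linspace(l, h, bins_per_dim) for l, h in zip(low, high)]

def discrete_to_continuous(indices, bins):
    """Map one discrete bin index per dimension back to a continuous action."""
    return np.array([b[i] for i, b in zip(indices, bins)])

# Example: a 2-D action space in [-1, 1]^2 with 5 bins per dimension.
# A discrete policy would then output one of 5 indices per dimension.
bins = make_action_bins([-1.0, -1.0], [1.0, 1.0], 5)
action = discrete_to_continuous([0, 4], bins)  # -> array([-1., 1.])
```

A fixed grid like this forces a trade-off between resolution and the size of the discrete action set; the paper's contribution is to adapt the discretization during training rather than keeping it static.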


Methods used in the Paper


METHOD                                       TYPE
DDPG                                         Policy Gradient Methods
Adam                                         Stochastic Optimization
Soft Actor-Critic (Autotuned Temperature)    Policy Gradient Methods
V-trace                                      Value Function Estimation
Entropy Regularization                       Regularization
Experience Replay                            Replay Memory
ReLU                                         Activation Functions
IMPALA                                       Policy Gradient Methods
PPO                                          Policy Gradient Methods
A2C                                          Policy Gradient Methods
A3C                                          Policy Gradient Methods