IMPACT: Importance Weighted Asynchronous Architectures with Clipped Target Networks

The practical usage of reinforcement learning agents is often bottlenecked by the duration of training time. To accelerate training, practitioners often turn to distributed reinforcement learning architectures to parallelize and accelerate the training process...
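Among the methods listed below, V-trace is the off-policy value estimator (introduced with IMPALA) that such asynchronous architectures rely on: raw importance ratios pi(a|x)/mu(a|x) are clipped before being used to correct the value targets. A minimal sketch of that computation, with illustrative names (`vtrace_targets`, `rho_bar`, `c_bar` follow the IMPALA paper's notation; this is not the authors' implementation):

```python
def vtrace_targets(rewards, values, next_values, ratios,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-trace value targets for one trajectory (illustrative sketch).

    ratios are the raw importance weights pi(a|x) / mu(a|x);
    rho_bar and c_bar are the clipping thresholds.
    """
    T = len(rewards)
    rhos = [min(rho_bar, r) for r in ratios]   # clipped rho_t
    cs = [min(c_bar, r) for r in ratios]       # clipped trace coefficient c_t
    # one-step TD errors weighted by the clipped ratios
    deltas = [rhos[t] * (rewards[t] + gamma * next_values[t] - values[t])
              for t in range(T)]
    vs = [0.0] * T
    acc = 0.0
    # backward recursion: v_t - V(x_t) = delta_t + gamma * c_t * (v_{t+1} - V(x_{t+1}))
    for t in reversed(range(T)):
        acc = deltas[t] + gamma * cs[t] * acc
        vs[t] = values[t] + acc
    return vs
```

When the behavior and target policies coincide (all ratios equal 1), the recursion reduces to the ordinary n-step return, which is a quick sanity check for the sketch.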

PDF Abstract (ICLR 2020)
No code implementations yet.

Results from the Paper


No results reported for this paper.

Methods used in the Paper


METHOD                  | TYPE
Sigmoid Activation      | Activation Functions
Tanh Activation         | Activation Functions
PPO                     | Policy Gradient Methods
V-trace                 | Value Function Estimation
Experience Replay       | Replay Memory
Entropy Regularization  | Regularization
Residual Connection     | Skip Connections
Gradient Clipping       | Optimization
RMSProp                 | Stochastic Optimization
ReLU                    | Activation Functions
Max Pooling             | Pooling Operations
Convolution             | Convolutions
LSTM                    | Recurrent Neural Networks
IMPALA                  | Policy Gradient Methods