Quantized Reinforcement Learning (QUARL)

Deep reinforcement learning has achieved significant milestones; however, the computational demands of reinforcement learning training and inference remain substantial. Quantization is an effective method for reducing the computational overhead of neural networks, but in the context of reinforcement learning it is unknown whether quantization's computational benefits outweigh the accuracy costs introduced by the corresponding quantization error.
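
As context for the kind of quantization the abstract refers to, below is a minimal sketch of post-training quantization applied to a small policy network, using PyTorch's dynamic quantization API. The network architecture, layer sizes, and observation/action dimensions are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch: int8 post-training quantization of a hypothetical policy network.
import torch
import torch.nn as nn

# Hypothetical policy network: observation vector -> action logits.
policy = nn.Sequential(
    nn.Linear(4, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

# Quantize the Linear layers' weights to int8; activations are quantized
# dynamically at inference time. This reduces model size and can speed up
# inference, at the cost of some quantization error.
quantized_policy = torch.quantization.quantize_dynamic(
    policy, {nn.Linear}, dtype=torch.qint8
)

obs = torch.randn(1, 4)
with torch.no_grad():
    logits = quantized_policy(obs)
print(logits)
```

Dynamic quantization is only one point in the design space; whether its speed and memory savings are worth the induced error for RL policies is exactly the trade-off the paper investigates.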


Methods used in the Paper


METHOD                 | TYPE
Entropy Regularization | Regularization
Weight Decay           | Regularization
Convolution            | Convolutions
PPO                    | Policy Gradient Methods
Adam                   | Stochastic Optimization
Dense Connections      | Feedforward Networks
Batch Normalization    | Normalization
ReLU                   | Activation Functions
A2C                    | Policy Gradient Methods
Experience Replay      | Replay Memory
DDPG                   | Policy Gradient Methods
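
As an illustration of one method from this table, here is a minimal sketch of how an entropy-regularization bonus enters a policy-gradient loss, as used by methods such as A2C and PPO. The function name, tensor shapes, and coefficient value are assumptions for illustration, not the paper's implementation.

```python
# Sketch: policy-gradient loss with an entropy bonus (hypothetical helper).
import torch
import torch.nn.functional as F

def policy_loss(logits, actions, advantages, entropy_coef=0.01):
    """Policy-gradient loss with entropy regularization.

    logits:     (batch, n_actions) unnormalized action scores
    actions:    (batch,) sampled action indices (long tensor)
    advantages: (batch,) advantage estimates
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # REINFORCE-style term: maximize advantage-weighted log-probability
    # of the actions actually taken.
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    pg_loss = -(chosen * advantages).mean()
    # Entropy bonus: penalizing low-entropy policies encourages exploration.
    entropy = -(probs * log_probs).sum(dim=-1).mean()
    return pg_loss - entropy_coef * entropy
```

Subtracting the scaled entropy term means minimizing the loss pushes the policy toward higher entropy, which is the standard role of entropy regularization in the policy-gradient methods listed above.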