Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations

A deep reinforcement learning (DRL) agent perceives its states through observations, which may contain natural measurement errors or adversarial noise. Because the observations deviate from the true states, they can mislead the agent into taking suboptimal actions...

Published at NeurIPS 2020.
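The threat model sketched in the abstract can be made concrete with a small example: an adversary perturbs only the agent's observation (not the underlying environment state) within a small L-infinity ball, and an otherwise well-trained policy may then select a different, worse action. The snippet below is a minimal illustration assuming a PyTorch Q-network; the `QNet` architecture, the `perturb_observation` helper, and the one-step FGSM-style attack are all illustrative assumptions, not the paper's algorithm or experimental setup.

```python
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Hypothetical Q-network; the architecture is illustrative only."""

    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)


def perturb_observation(q_net, obs, epsilon=0.05):
    """One FGSM-style step that perturbs the observation within an L-inf ball.

    The true environment state is unchanged; only what the agent sees is
    shifted, which can flip the greedy action to a suboptimal one.
    """
    obs = obs.clone().detach().requires_grad_(True)
    q_values = q_net(obs)
    greedy = q_values.argmax(dim=-1, keepdim=True)
    # Adversarial objective: push down the Q-value of the currently greedy action.
    loss = -q_values.gather(-1, greedy).sum()
    loss.backward()
    # Ascend the adversarial loss with one signed-gradient step of size epsilon.
    adv_obs = obs + epsilon * obs.grad.sign()
    return adv_obs.detach()


if __name__ == "__main__":
    q_net = QNet(obs_dim=4, n_actions=2)
    obs = torch.randn(1, 4)                    # clean observation of the state
    adv_obs = perturb_observation(q_net, obs)  # adversarially perturbed observation
    print(q_net(obs).argmax(-1), q_net(adv_obs).argmax(-1))  # greedy actions may differ
```

The perturbation budget `epsilon` bounds how far the observed state may move from the true state; larger budgets make action flips more likely, which is the failure mode that robust training against observation perturbations aims to prevent.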

Methods used in the Paper


METHOD                   TYPE
Entropy Regularization   Regularization
PPO                      Policy Gradient Methods
Experience Replay        Replay Memory
Weight Decay             Regularization
ReLU                     Activation Functions
Q-Learning               Off-Policy TD Control
Adam                     Stochastic Optimization
Batch Normalization      Normalization
DDPG                     Policy Gradient Methods
Dense Connections        Feedforward Networks
Convolution              Convolutions
DQN                      Q-Learning Networks