A New Approach for Tactical Decision Making in Lane Changing: Sample Efficient Deep Q Learning with a Safety Feedback Reward

24 Sep 2020 · M. Ugur Yavas, N. Kemal Ure, Tufan Kumbasar

Automated lane changing is one of the most challenging tasks for highly automated vehicles due to its safety-critical, uncertain, and multi-agent nature. This paper presents a novel deployment of the state-of-the-art Q-learning method, namely Rainbow DQN, which uses a new safety-driven rewarding scheme to tackle these issues in a dynamic and uncertain simulation environment...
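The paper's exact reward formulation is not given in this excerpt. As a rough illustration of what a safety-driven rewarding scheme can look like, the sketch below wraps a lane-change environment and adds penalties for crashes and unsafe gaps; the wrapper name, info keys, and thresholds are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a safety-feedback reward wrapper for a lane-change
# environment. Names such as "crashed", "time_to_collision", and the chosen
# thresholds are assumptions made for illustration only.
import gymnasium as gym


class SafetyFeedbackReward(gym.Wrapper):
    """Adds a safety penalty to the base reward of a lane-change environment."""

    def __init__(self, env, ttc_threshold=2.0, collision_penalty=-10.0):
        super().__init__(env)
        self.ttc_threshold = ttc_threshold          # minimum acceptable time-to-collision (s)
        self.collision_penalty = collision_penalty  # large negative reward for a crash

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if info.get("crashed", False):
            # Penalize collisions outright.
            reward += self.collision_penalty
        elif info.get("time_to_collision", float("inf")) < self.ttc_threshold:
            # Penalize maneuvers that leave an unsafe gap to surrounding traffic.
            reward -= 1.0
        return obs, reward, terminated, truncated, info
```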



Methods used in the Paper


METHOD                          TYPE
Convolution                     Convolutions
Q-Learning                      Off-Policy TD Control
Dense Connections               Feedforward Networks
N-step Returns                  Value Function Estimation
Noisy Linear Layer              Randomized Value Functions
Double Q-learning               Off-Policy TD Control
DQN                             Q-Learning Networks
Dueling Network                 Q-Learning Networks
Prioritized Experience Replay   Replay Memory
Rainbow DQN                     Q-Learning Networks
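Rainbow DQN combines the components listed above on top of a standard DQN. As a minimal sketch, assuming a PyTorch implementation (not the authors' code) and showing only the dueling Q-network head, the combination of a state-value stream and an advantage stream might look like this; in full Rainbow, the linear layers would be replaced with noisy linear layers and training would use double Q-learning, n-step returns, and prioritized experience replay.

```python
# Minimal, assumed sketch of a dueling Q-network head in PyTorch, one of the
# Rainbow components listed in the table above.
import torch
import torch.nn as nn


class DuelingQNetwork(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Separate streams for the state value V(s) and action advantages A(s, a).
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.feature(obs)
        value = self.value(h)          # shape: (batch, 1)
        advantage = self.advantage(h)  # shape: (batch, n_actions)
        # Subtracting the mean advantage keeps the V/A decomposition identifiable.
        return value + advantage - advantage.mean(dim=1, keepdim=True)
```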