Self-Learning Exploration and Mapping for Mobile Robots via Deep Reinforcement Learning

6 Jan 2019  ·  Fanfei Chen, Shi Bai, Tixiao Shan, Brendan Englot

Mapping and exploration of a priori unknown environments are crucial capabilities for mobile robot autonomy. A state-of-the-art approach for mobile robots equipped with range sensors uses mutual information as the basis of its cost metric, reasoning about how much information gain is associated with each action the robot can take while constructing an occupancy map from its range measurements. However, the computational cost of this optimization scales poorly as the number of candidate robot actions increases. We propose a novel approach that uses Deep Reinforcement Learning (DRL) to exploit the local structure of the environment when predicting a robot's optimal sensing action. The learned exploration policy selects an optimal or near-optimal exploratory sensing action with improved computational efficiency. Our computational results demonstrate that the proposed method is both efficient and accurate in choosing informative sensing actions.
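
To make the baseline that the abstract refers to concrete, the sketch below scores candidate sensing poses on an occupancy grid by the Shannon entropy of the cells each pose would observe, a simplified stand-in for mutual information that ignores occlusion and ray casting. Everything here is an illustrative assumption rather than the paper's implementation: the helper names (cell_entropy, expected_info_gain, greedy_mi_action), the circular sensor footprint, and the toy grid are all hypothetical. The per-candidate evaluation loop is exactly the cost that grows with the number of actions and that the learned DRL policy is intended to replace with a single forward pass over the local map structure.

```python
import numpy as np

# Hypothetical occupancy grid: each cell stores P(occupied); 0.5 means unknown.

def cell_entropy(p):
    """Shannon entropy (bits) of Bernoulli occupancy cells."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_info_gain(grid, pose, sensor_radius):
    """Approximate information gain of sensing from `pose`: total entropy of
    cells within sensor range (occlusion and ray casting ignored for brevity)."""
    ys, xs = np.indices(grid.shape)
    in_range = (ys - pose[0]) ** 2 + (xs - pose[1]) ** 2 <= sensor_radius ** 2
    return cell_entropy(grid[in_range]).sum()

def greedy_mi_action(grid, candidate_poses, sensor_radius):
    """Classical information-gain exploration: evaluate every candidate action.
    This loop is what scales poorly as the action set grows; a learned policy
    replaces it with one evaluation of the local map."""
    gains = [expected_info_gain(grid, p, sensor_radius) for p in candidate_poses]
    best = int(np.argmax(gains))
    return candidate_poses[best], gains[best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = np.full((60, 60), 0.5)                              # all unknown
    grid[:30, :30] = rng.choice([0.05, 0.95], size=(30, 30))   # explored corner
    candidates = [(10, 10), (15, 45), (45, 15), (45, 45)]
    pose, gain = greedy_mi_action(grid, candidates, sensor_radius=12)
    print(f"best sensing pose {pose}, expected gain ~ {gain:.1f} bits")
```

In this toy setup the greedy scorer naturally prefers poses whose sensor footprint covers mostly unknown (0.5-probability) cells, which is the behavior an information-gain cost metric encodes; the paper's contribution is learning to predict such choices directly rather than enumerating and evaluating them.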
