Universal Successor Features Based Deep Reinforcement Learning for Navigation

This thesis presents a novel approach to robot navigation based on an advanced Deep Reinforcement Learning (DRL) algorithm. Visual navigation is a core problem in robotics and machine vision. Previous research relied on map-based, map-building, or map-less navigation strategies. The first two approaches were favored in the past, but they depend on accurate mapping of the environment and a careful human-guided training phase, which limits their generalizability. With recent developments in DRL, map-less navigation has advanced considerably. A remaining challenge for DRL algorithms is transferring what has been learned to new tasks or goals, i.e., transfer learning. To address the challenges of transfer learning and performance, this thesis presents a new approach based on Universal Successor Features (USF). We propose several models and apply them to target-driven visual navigation in a complex, photo-realistic environment using the AI2THOR simulator. Evaluating these models in AI2THOR, we demonstrate that an agent can successfully improve its ability to reach goals it was not initially trained on.
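As general background on the successor-feature idea underlying USF (and not a description of this thesis's exact architecture): rewards are assumed to decompose as r(s, a, s') ≈ φ(s, a, s')ᵀ w(g), so the goal-conditioned action value factors into Q(s, a, g) = ψ(s, a, g)ᵀ w(g), where ψ accumulates expected discounted features under the goal-conditioned policy. The snippet below is a minimal, hypothetical NumPy sketch of this factorization; the names (`psi`, `w_goal`, feature dimension `d`) and shapes are illustrative assumptions, not code from the thesis.

```python
# Minimal sketch of the (universal) successor feature factorization.
# Hypothetical names and shapes; illustrative only, not the thesis's implementation.
import numpy as np

d = 8            # dimensionality of the state features phi
n_actions = 4    # e.g. move ahead/back, rotate left/right

rng = np.random.default_rng(0)

# psi[a] ~ expected discounted sum of future features phi when taking
# action a in the current state and then following the goal-conditioned policy.
psi = rng.normal(size=(n_actions, d))

# w_goal ~ goal-specific reward weights, assumed to satisfy r ≈ phi . w_goal.
w_goal = rng.normal(size=d)

# Goal-conditioned action values: Q(s, a, g) = psi(s, a, g) . w(g)
q_values = psi @ w_goal

greedy_action = int(np.argmax(q_values))
print(q_values, greedy_action)
```

Because the goal enters only through w(g) (and through ψ's conditioning), the same learned features can, in principle, be reused for goals not seen during training, which is the transfer property the thesis targets.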
