
Visual Navigation

28 papers with code · Robots

Visual Navigation is the problem of navigating an agent, e.g. a mobile robot, through an environment using camera input only. The agent is given a target image (the image it would see from the target position), and its goal is to reach that position from its current one by applying a sequence of actions chosen from the camera observations alone.

Source: Vision-based Navigation Using Deep Reinforcement Learning
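
To make the task interface concrete, here is a minimal, gym-style sketch of the loop described above; the environment class, observation shapes, and action set are illustrative assumptions, not any benchmark's actual API.

    import numpy as np

    ACTIONS = ("forward", "turn_left", "turn_right", "stop")

    class ImageGoalNavEnv:
        def reset(self):
            obs = np.zeros((224, 224, 3), dtype=np.uint8)   # current camera view
            goal = np.zeros((224, 224, 3), dtype=np.uint8)  # view from the target pose
            return obs, goal

        def step(self, action):
            obs = np.zeros((224, 224, 3), dtype=np.uint8)   # next camera view
            return obs, action == "stop"                    # (observation, done)

    def navigate(env, policy, max_steps=500):
        obs, goal = env.reset()
        for _ in range(max_steps):
            action = policy(obs, goal)    # decided from camera input only, no pose
            obs, done = env.step(action)
            if done:
                break

The essential contract is the final loop: the policy receives only the current image and the goal image, never the agent's position.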

Latest papers with code

Explore then Execute: Adapting without Rewards via Factorized Meta-Reinforcement Learning

6 Aug 2020 · ezliu/dream

In principle, meta-reinforcement learning approaches can exploit this shared structure, but in practice, they fail to adapt to new environments when adaptation requires targeted exploration (e.g., exploring the cabinets to find ingredients in a new kitchen).

META REINFORCEMENT LEARNING · VISUAL NAVIGATION

★ 5

One-Shot Informed Robotic Visual Search in the Wild

22 Mar 2020 · rvl-lab-utoronto/visual_search_in_the_wild

In this paper we propose a method for informed visual navigation: a learned visual similarity operator guides the robot's visual search towards parts of the scene that resemble an exemplar image, which the user provides as a high-level specification for data collection (a toy version of such an operator is sketched after this entry).

REPRESENTATION LEARNING · ROBOT NAVIGATION · VISUAL NAVIGATION

★ 2
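
One way to realize such a similarity operator, as a hedged sketch: correlate a pooled exemplar descriptor against dense scene features to obtain a heatmap of exemplar-like regions. The ResNet-18 backbone and tensor shapes are assumptions for illustration, not the paper's model.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    backbone = models.resnet18(weights=None)
    features = torch.nn.Sequential(*list(backbone.children())[:-2])  # conv maps only

    def similarity_heatmap(scene, exemplar):
        # scene: (1, 3, H, W); exemplar: (1, 3, h, w), normalized RGB tensors
        fs = F.normalize(features(scene), dim=1)            # (1, C, H', W')
        fe = features(exemplar).mean(dim=(2, 3))            # (1, C) pooled descriptor
        kernel = F.normalize(fe, dim=1)[:, :, None, None]   # (1, C, 1, 1) conv kernel
        return F.conv2d(fs, kernel)                         # (1, 1, H', W') heatmap

Peaks in the returned heatmap mark scene regions worth steering the search towards.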

Visual Navigation Among Humans with Optimal Control as a Supervisor

20 Mar 2020 · vtolani95/HumANav-Release

We propose a novel framework for navigation around humans which combines learning-based perception with model-based optimal control (a simplified version of this split is sketched after this entry).

ROBOT NAVIGATION · VISUAL NAVIGATION

★ 4
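
A hedged sketch of that split: a learned module maps the camera image to a waypoint, and a classical feedback controller tracks it. The waypoint interface, the controller, and the gains below are illustrative stand-ins, not the paper's actual controller.

    import numpy as np

    def predict_waypoint(image):
        # learned perception (placeholder): image -> (x, y) waypoint in robot frame
        return np.array([1.0, 0.0])

    def track_waypoint(state, waypoint, k_rho=1.0, k_alpha=2.0):
        # classic polar-coordinate feedback law toward the waypoint
        dx, dy = waypoint[0] - state[0], waypoint[1] - state[1]
        rho = np.hypot(dx, dy)                            # distance to waypoint
        alpha = np.arctan2(dy, dx) - state[2]             # heading error
        alpha = np.arctan2(np.sin(alpha), np.cos(alpha))  # wrap to [-pi, pi]
        return k_rho * rho, k_alpha * alpha               # (forward velocity, turn rate)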

Sparse Graphical Memory for Robust Planning

13 Mar 2020 · scottemmons/sgm

We wish to combine the strengths of deep learning and classical planning to solve long-horizon tasks from raw sensory input (the graph-over-embeddings idea is sketched after this entry).

IMITATION LEARNING · VISUAL NAVIGATION

★ 12
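
A toy sketch of the combination: store a sparse set of landmark embeddings of past observations, link landmarks that appear mutually reachable, and run classical shortest-path planning over the graph. The thresholds and distance metric are placeholders, not the paper's learned criteria.

    import networkx as nx
    import numpy as np

    SPARSITY = 1.0   # min embedding distance before a new landmark is stored
    REACH = 2.0      # max embedding distance at which two landmarks get an edge

    graph = nx.Graph()

    def add_observation(emb):
        # keep the memory sparse: only store embeddings far from all landmarks
        if all(np.linalg.norm(emb - graph.nodes[n]["emb"]) > SPARSITY for n in graph):
            node = graph.number_of_nodes()
            graph.add_node(node, emb=emb)
            for other in range(node):   # link landmarks a local policy could traverse
                d = np.linalg.norm(emb - graph.nodes[other]["emb"])
                if d < REACH:
                    graph.add_edge(node, other, weight=d)

    def plan(start, goal):
        # classical planning over the learned-embedding graph
        return nx.shortest_path(graph, start, goal, weight="weight")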

Extending Maps with Semantic and Contextual Object Information for Robot Navigation: a Learning-Based Framework using Visual and Depth Cues

13 Mar 2020 · verlab/3d-object-semantic-mapping

The formulation is designed to identify and disregard dynamic objects in order to obtain a medium-term invariant map representation (the masking step is sketched after this entry).

ROBOT NAVIGATION · SEMANTIC SEGMENTATION · VISUAL NAVIGATION

★ 11
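
A minimal sketch of "disregard dynamic objects when building the map": mask out pixels whose semantic class is dynamic before fusing depth into the global map. The class list and the fuse callback are illustrative assumptions, not the paper's pipeline.

    import numpy as np

    DYNAMIC_CLASSES = {"person", "car", "bicycle", "dog"}  # assumed label set

    def fuse_static_only(global_map, depth, semantics, class_names, fuse):
        # depth: (H, W) float metric depth; semantics: (H, W) int class ids
        dynamic_ids = [i for i, c in enumerate(class_names) if c in DYNAMIC_CLASSES]
        static = ~np.isin(semantics, dynamic_ids)
        fuse(global_map, np.where(static, depth, np.nan))  # fuse static geometry only
        return global_map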

MVP: Unified Motion and Visual Self-Supervised Learning for Large-Scale Robotic Navigation

2 Mar 2020 · mchancan/citylearn

Our experimental results, on traversals of the Oxford RobotCar dataset with no GPS data, show that MVP can achieve navigation success rates of 53% and 93% using visual odometry (VO) and radar odometry (RO), respectively, compared to 7% for a vision-only method.

AUTONOMOUS NAVIGATION · MOTION ESTIMATION · SELF-SUPERVISED LEARNING · VISUAL NAVIGATION · VISUAL PLACE RECOGNITION

★ 7

Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training

CVPR 2020 · weituo12321/PREVALENT

By training on a large number of image-text-action triplets in a self-supervised manner, the pre-trained model provides generic representations of visual environments and language instructions (one way to encode such triplets is sketched after this entry).

SELF-SUPERVISED LEARNING · VISUAL NAVIGATION

★ 39 · 25 Feb 2020
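
A hedged sketch of encoding image-text-action triplets: embed instruction tokens and per-view image features into one sequence, encode them jointly, and predict the action taken. The dimensions and the small two-layer architecture are illustrative assumptions, not PREVALENT's actual model.

    import torch
    import torch.nn as nn

    class TripletEncoder(nn.Module):
        def __init__(self, vocab=30000, d=256, n_actions=6):
            super().__init__()
            self.tok = nn.Embedding(vocab, d)    # instruction word ids
            self.img = nn.Linear(2048, d)        # precomputed per-view image features
            layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.action_head = nn.Linear(d, n_actions)

        def forward(self, tokens, img_feats):
            # tokens: (B, T) word ids; img_feats: (B, V, 2048) view features
            x = torch.cat([self.tok(tokens), self.img(img_feats)], dim=1)
            h = self.encoder(x)                  # joint text-vision representation
            return self.action_head(h[:, 0])     # predict the triplet's action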

Discriminative Particle Filter Reinforcement Learning for Complex Partial Observations

ICLR 2020 · Yusufma03/DPFRL

The particle filter maintains a belief using a learned discriminative update, which is trained end-to-end for decision making (the shape of such an update is sketched after this entry).

ATARI GAMES · DECISION MAKING · VISUAL NAVIGATION

★ 8 · 23 Feb 2020
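
A hedged sketch of a discriminative particle filter update: instead of an analytic observation likelihood, a small network scores (particle, observation) pairs, and the whole reweighting step is differentiable. The network sizes are arbitrary choices for illustration.

    import torch
    import torch.nn as nn

    class DiscriminativeUpdate(nn.Module):
        def __init__(self, state_dim, obs_dim, hidden=64):
            super().__init__()
            self.score = nn.Sequential(
                nn.Linear(state_dim + obs_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, particles, log_weights, obs):
            # particles: (N, state_dim); log_weights: (N,); obs: (obs_dim,)
            obs_rep = obs.expand(particles.size(0), -1)
            log_lik = self.score(torch.cat([particles, obs_rep], -1)).squeeze(-1)
            log_weights = log_weights + log_lik   # learned measurement update
            return log_weights - torch.logsumexp(log_weights, 0)  # renormalize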

Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks

ICLR 2020 · jozhang97/side-tuning

When training a neural network for a desired task, one may prefer to adapt a pre-trained network rather than start from randomly initialized weights (the additive side-network idea is sketched after this entry).

IMITATION LEARNING · INCREMENTAL LEARNING · QUESTION ANSWERING · TRANSFER LEARNING · VISUAL NAVIGATION

★ 31 · 31 Dec 2019
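
A minimal sketch of side-tuning as described in the title: a frozen pre-trained base network plus a small trainable side network, blended additively. The sigmoid-alpha parameterization below is one plausible choice, an assumption rather than the paper's exact schedule.

    import torch
    import torch.nn as nn

    class SideTuned(nn.Module):
        def __init__(self, base: nn.Module, side: nn.Module):
            super().__init__()
            self.base, self.side = base, side
            for p in self.base.parameters():
                p.requires_grad = False                 # pre-trained base stays frozen
            self.alpha = nn.Parameter(torch.zeros(1))   # learned blending weight

        def forward(self, x):
            a = torch.sigmoid(self.alpha)
            return a * self.base(x) + (1.0 - a) * self.side(x)  # additive blend

Only the side network and the blending weight are trained, so adaptation is cheap and the pre-trained weights are never overwritten.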

Are We Making Real Progress in Simulated Environments? Measuring the Sim2Real Gap in Embodied Visual Navigation

13 Dec 2019 · facebookresearch/habitat-api

We find that SRCC for Habitat as used for the CVPR19 challenge is low (0.18 for the success metric), which suggests that performance improvements on this simulator-based challenge would not transfer well to a physical robot (how such a correlation is computed is sketched after this entry).

POINTGOAL NAVIGATION · VISUAL NAVIGATION

★ 458
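
SRCC here measures how well per-method performance in simulation correlates with performance on a physical robot. As a rough illustration (the statistic and the numbers below are assumptions, not necessarily the paper's exact definition or data), a rank correlation over paired success rates captures the idea:

    from scipy.stats import spearmanr

    sim_success  = [0.90, 0.75, 0.60, 0.85, 0.70]   # per-method success in simulation
    real_success = [0.40, 0.55, 0.35, 0.30, 0.50]   # same methods on the real robot

    rho, _ = spearmanr(sim_success, real_success)
    print(f"rank correlation = {rho:.2f}")  # low value: sim gains may not transfer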