Visual Navigation

34 papers with code · Robots

Visual Navigation is the problem of navigating an agent, e.g. a mobile robot, in an environment using camera input only. The agent is given a target image (an image it will see from the target position), and its goal is to move from its current position to the target by applying a sequence of actions, based only on its camera observations; a minimal sketch of this interaction loop is given below.

Source: Vision-based Navigation Using Deep Reinforcement Learning
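To make the setup concrete, here is a purely illustrative sketch of the interaction loop: a toy grid environment and a placeholder random policy stand in for a real simulator and a learned policy. All names below are hypothetical, not the API of any particular framework.

```python
"""Minimal sketch of target-driven visual navigation (illustrative only)."""
import numpy as np

ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

class ToyVisualNavEnv:
    """Toy grid world whose observations stand in for camera images."""

    def __init__(self, size=8, seed=0):
        self.size = size
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.agent = self.rng.integers(0, self.size, size=2)
        self.target = self.rng.integers(0, self.size, size=2)
        # The agent receives its current view and the view from the target pose.
        return self._render(self.agent), self._render(self.target)

    def step(self, action):
        move = np.array(ACTIONS[action])
        self.agent = np.clip(self.agent + move, 0, self.size - 1)
        done = bool(np.array_equal(self.agent, self.target))
        return self._render(self.agent), done

    def _render(self, pos):
        """Fake 'camera image': a one-hot map marking the given position."""
        img = np.zeros((self.size, self.size), dtype=np.float32)
        img[tuple(pos)] = 1.0
        return img


def policy(obs, target_obs, rng):
    """Placeholder for a learned policy pi(action | current image, target image)."""
    return rng.choice(list(ACTIONS))


if __name__ == "__main__":
    env = ToyVisualNavEnv()
    rng = np.random.default_rng(1)
    obs, target_obs = env.reset()
    for t in range(100):
        obs, done = env.step(policy(obs, target_obs, rng))
        if done:
            print(f"reached target after {t + 1} steps")
            break
```

A real setup would replace the toy renderer with a simulator or robot camera, and the random policy with a trained model.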

Greatest papers with code

Visual Representations for Semantic Target Driven Navigation

15 May 2018 · tensorflow/models

We propose using high-level semantic and contextual features, including segmentation and detection masks obtained from off-the-shelf state-of-the-art vision models, as observations, and use a deep network to learn the navigation policy.

DOMAIN ADAPTATION · VISUAL NAVIGATION
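Not the paper's actual architecture, but a rough sketch of the idea in the entry above: segmentation/detection masks from off-the-shelf vision models are stacked into a multi-channel "semantic observation" and fed to a small convolutional policy network that outputs action logits. The layer sizes and channel count here are invented for illustration.

```python
import torch
import torch.nn as nn

class SemanticNavPolicy(nn.Module):
    """Generic sketch: semantic masks in, action logits out (hypothetical sizes)."""

    def __init__(self, num_semantic_channels=16, num_actions=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(num_semantic_channels, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_actions)  # logits over navigation actions

    def forward(self, semantic_masks):
        # semantic_masks: (batch, channels, H, W), one channel per class/detector.
        return self.head(self.encoder(semantic_masks))

# Dummy batch of semantic observations, just to show the shapes.
logits = SemanticNavPolicy()(torch.zeros(2, 16, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```

In a real system the mask channels would come from pretrained segmentation and detection models rather than zeros, and the policy would be trained with imitation or reinforcement learning.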

Cognitive Mapping and Planning for Visual Navigation

CVPR 2017 · tensorflow/models

The accumulated belief of the world enables the agent to track visited regions of the environment.

VISUAL NAVIGATION
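Not the model from the paper, but a toy illustration of the sentence in the entry above: the agent accumulates a persistent record of where it has been, so its belief of the world lets it tell new regions from visited ones. The poses here are made-up (row, col) grid cells.

```python
visited = set()  # accumulated belief: which cells the agent has already seen

def update_belief(pose):
    """Add the agent's current cell to the accumulated map."""
    visited.add(pose)

def is_new_region(pose):
    """The accumulated map lets the agent recognise unexplored cells."""
    return pose not in visited

trajectory = [(0, 0), (0, 1), (1, 1), (0, 1)]  # example poses
for pose in trajectory:
    print(pose, "new" if is_new_region(pose) else "already visited")
    update_belief(pose)
```

Mapping-based agents maintain a much richer spatial belief than this visited set, but the bookkeeping pattern is the same: update the map at every step, then read it when deciding where to go next.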

Sim2Real Predictivity: Does Evaluation in Simulation Predict Real-World Performance?

13 Dec 2019 · facebookresearch/habitat-api

Second, we investigate the sim2real predictivity of Habitat-Sim for PointGoal navigation.

POINTGOAL NAVIGATION · VISUAL NAVIGATION

An Open Source and Open Hardware Deep Learning-powered Visual Navigation Engine for Autonomous Nano-UAVs

10 May 2019 · pulp-platform/pulp-dronet

Nano-size unmanned aerial vehicles (UAVs), with diameters of a few centimeters and a total power budget below 10 Watts, have so far been considered incapable of running sophisticated vision-based autonomous navigation software without external aid from base stations, ad-hoc local positioning infrastructure, and powerful external computation servers.

AUTONOMOUS NAVIGATION · VISUAL NAVIGATION

A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

4 May 2018 · pulp-platform/pulp-dronet

As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on-board within a strict 6 fps real-time constraint, with no compromise in flight results, while all processing is done with only 64 mW on average.

AUTONOMOUS NAVIGATION · VISUAL NAVIGATION
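A quick back-of-the-envelope check using only the two figures quoted in the entry above (average power and frame rate; the paper's own energy accounting may differ):

```python
# Derived only from the 64 mW average power and 6 fps constraint quoted above.
avg_power_w = 0.064   # 64 mW average processing power
frame_rate_hz = 6     # real-time constraint
energy_per_frame_mj = avg_power_w / frame_rate_hz * 1e3
print(f"~{energy_per_frame_mj:.1f} mJ of processing energy per frame")  # ~10.7 mJ
```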

Explore then Execute: Adapting without Rewards via Factorized Meta-Reinforcement Learning

6 Aug 2020 · maximecb/gym-miniworld

In principle, meta-reinforcement learning approaches can exploit this shared structure, but in practice, they fail to adapt to new environments when adaptation requires targeted exploration (e.g., exploring the cabinets to find ingredients in a new kitchen).

META REINFORCEMENT LEARNING · VISUAL NAVIGATION

Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments

CVPR 2018 · peteanderson80/Matterport3DSimulator

This is significant because a robot interpreting a natural-language navigation instruction on the basis of what it sees is carrying out a vision and language process that is similar to Visual Question Answering.

VISUAL NAVIGATION · VISUAL QUESTION ANSWERING

Learning to Learn How to Learn: Self-Adaptive Visual Navigation Using Meta-Learning

CVPR 2019 · allenai/savn

In this paper we study the problem of learning to learn at both training and test time in the context of visual navigation.

META-LEARNING · META REINFORCEMENT LEARNING · VISUAL NAVIGATION