Visual Navigation

28 papers with code · Robots

Visual Navigation is the problem of navigating an agent, e.g. a mobile robot, in an environment using only camera input. The agent is given a target image (an image it will see from the target position), and its goal is to move from its current position to the target by applying a sequence of actions based on its camera observations alone.

Source: Vision-based Navigation Using Deep Reinforcement Learning
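
To make the task setup concrete, the following is a minimal sketch (in Python) of the image-goal navigation loop described above. The environment interface (env.reset, env.step), the discrete action set, and the policy network are illustrative assumptions, not any particular benchmark's API.

    import torch

    # Discrete action set commonly used in visual navigation benchmarks (an assumption here).
    ACTIONS = ["MOVE_FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP"]

    def navigate(env, policy, max_steps=500):
        """Run one image-goal navigation episode.

        `env` and `policy` are hypothetical stand-ins: the environment returns the
        current camera image and the target image, and the policy maps
        (observation, goal image) to scores over the discrete actions above.
        """
        obs, target_image = env.reset()  # current camera view + the image seen from the goal pose
        for _ in range(max_steps):
            with torch.no_grad():
                logits = policy(obs, target_image)  # score each candidate action
            action = ACTIONS[int(torch.argmax(logits))]
            if action == "STOP":  # the agent declares that it has reached the goal
                break
            obs = env.step(action)  # apply the action and receive the next camera view
        return obs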

Benchmarks

Latest papers without code

Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation

27 Jul 2020

Reinforcement Learning (RL), among other learning-based methods, is a powerful tool for solving complex robotic tasks (e.g., actuation, manipulation, navigation, etc.).

VISUAL NAVIGATION

Learning Object Relation Graph and Tentative Policy for Visual Navigation

21 Jul 2020

Aiming to improve these two components, this paper proposes three complementary techniques: object relation graph (ORG), trial-driven imitation learning (IL), and a memory-augmented tentative policy network (TPN).

IMITATION LEARNING · REPRESENTATION LEARNING · VISUAL NAVIGATION

Virtual Testbed for Monocular Visual Navigation of Small Unmanned Aircraft Systems

1 Jul 2020

Monocular visual navigation methods have seen significant advances in the last decade, recently producing several real-time solutions for autonomously navigating small unmanned aircraft systems without relying on GPS.

MONOCULAR VISUAL ODOMETRY · VISUAL NAVIGATION

Semantic Visual Navigation by Watching YouTube Videos

17 Jun 2020

This paper learns and leverages semantic cues for navigating to objects of interest in novel environments by simply watching YouTube videos.

Q-LEARNING · VISUAL NAVIGATION

DeepRelativeFusion: Dense Monocular SLAM using Single-Image Relative Depth Prediction

7 Jun 2020

Despite the absence of absolute scale and depth range, the relative depth maps can be corrected using their respective semi-dense depth maps from the SLAM algorithm.

DEPTH ESTIMATION · SIMULTANEOUS LOCALIZATION AND MAPPING · VISUAL NAVIGATION

Unsupervised Reinforcement Learning of Transferable Meta-Skills for Embodied Navigation

CVPR 2020

Visual navigation is the task of training an embodied agent to intelligently navigate to a target object (e.g., a television) using only visual observations.

VISUAL NAVIGATION

Neural Topological SLAM for Visual Navigation

CVPR 2020

This paper studies the problem of image-goal navigation, which involves navigating to the location indicated by a goal image in a novel, previously unseen environment.

VISUAL NAVIGATION

Dynamic Value Estimation for Single-Task Multi-Scene Reinforcement Learning

25 May 2020

Training deep reinforcement learning agents on environments with multiple levels/scenes/conditions from the same task has become essential for many applications aiming to achieve generalization and domain transfer from simulation to the real world.

VISUAL NAVIGATION

VisualEchoes: Spatial Image Representation Learning through Echolocation

4 May 2020

Several animal species (e.g., bats, dolphins, and whales) and even visually impaired humans have the remarkable ability to perform echolocation: a biological sonar used to perceive spatial layout and locate objects in the world.

MONOCULAR DEPTH ESTIMATION · REPRESENTATION LEARNING · VISUAL NAVIGATION