Visual Odometry

99 papers with code • 1 benchmark • 23 datasets

Visual Odometry is an information-fusion problem whose central aim is to estimate the pose of a robot from data collected by visual sensors.

Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
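As an illustration only, the following is a minimal two-frame sketch of the classic feature-based visual odometry step (feature matching, essential-matrix estimation, relative-pose recovery) using OpenCV. It is not the method of any paper listed on this page; the image paths and camera intrinsics K are placeholder assumptions.

```python
# Minimal two-frame monocular visual odometry sketch (feature-based pipeline).
# Image paths and the camera intrinsic matrix K below are illustrative placeholders.
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.193],   # example pinhole intrinsics: fx, 0, cx
              [0.0, 718.856, 185.216],   # 0, fy, cy
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and descriptors in both frames.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors with brute-force Hamming matching; keep the best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix with RANSAC, then recover the relative rotation
# and (unit-scale) translation between the two camera poses.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print("Relative rotation:\n", R)
print("Relative translation direction:\n", t.ravel())  # scale is unobservable in monocular VO
```

Chaining such relative poses over consecutive frames yields the camera trajectory; the papers below address the harder parts of the problem, such as scale, dynamic objects, and learned depth.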

Most implemented papers

Sparse Representations for Object and Ego-motion Estimation in Dynamic Scenes

hkashyap/SparseMotion 9 Mar 2019

Dynamic scenes that contain both object motion and egomotion are a challenge for monocular visual odometry (VO).

MAVNet: an Effective Semantic Segmentation Micro-Network for MAV-based Tasks

tynguyen/MAVNet 3 Apr 2019

Real-time semantic image segmentation on platforms subject to size, weight and power (SWaP) constraints is a key area of interest for air surveillance and inspection.

Continuous Direct Sparse Visual Odometry from RGB-D Images

MaaniGhaffari/cvo-rgbd 3 Apr 2019

This paper reports on a novel formulation and evaluation of visual odometry from RGB-D images.

Recurrent Neural Network for (Un-)supervised Learning of Monocular Video Visual Odometry and Depth

wrlife/RNN_depth_pose 15 Apr 2019

Deep learning-based, single-view depth estimation methods have recently shown highly promising results.

Extending Monocular Visual Odometry to Stereo Camera Systems by Scale Optimization

jiawei-mo/scale_optimization 29 May 2019

This paper proposes a novel approach for extending monocular visual odometry to a stereo camera system.

UnOS: Unified Unsupervised Optical-Flow and Stereo-Depth Estimation by Watching Videos

baidu-research/UnDepthflow CVPR 2019

In this paper, we propose UnOS, a unified system for unsupervised optical flow and stereo depth estimation with convolutional neural networks (CNNs) that takes advantage of their inherent geometric consistency under the rigid-scene assumption.

DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing

elnino9ykl/DS-PASS 17 Sep 2019

Semantically interpreting the traffic scene is crucial for autonomous transportation and robotics systems.

Adaptive Continuous Visual Odometry from RGB-D Images

MaaniGhaffari/cvo-rgbd 1 Oct 2019

In this paper, we extend the recently developed continuous visual odometry framework for RGB-D cameras to an adaptive framework via online hyperparameter learning.

SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation

Uehwan/SimVODIS 14 Nov 2019

Intelligent agents need to understand the surrounding environment to provide meaningful services to or interact intelligently with humans.

A Keyframe-based Continuous Visual SLAM for RGB-D Cameras via Nonparametric Joint Geometric and Appearance Representation

perl-sw/cvo-slam 2 Dec 2019

Experimental evaluations on publicly available RGB-D benchmarks show that the proposed keyframe selection technique, built on continuous visual odometry, outperforms its robust dense (and direct) visual odometry counterpart.