Visual Odometry

99 papers with code • 1 benchmark • 23 datasets

Visual Odometry is an area of information fusion in which the central aim is to estimate the pose of a robot from data collected by visual sensors.

Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
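At its core, visual odometry chains per-frame relative motion estimates into a global trajectory. The sketch below illustrates only this pose-composition step, using 2D (SE(2)) poses and hypothetical per-frame motion estimates for simplicity; real VO systems estimate these increments from image data and work in 3D.

```python
import math

def compose(pose, delta):
    """Compose a global SE(2) pose with a relative motion.

    pose  = (x, y, theta): current pose in the world frame
    delta = (dx, dy, dtheta): motion expressed in the robot's own frame
    """
    x, y, th = pose
    dx, dy, dth = delta
    return (
        x + dx * math.cos(th) - dy * math.sin(th),
        y + dx * math.sin(th) + dy * math.cos(th),
        th + dth,
    )

# Hypothetical per-frame estimates: forward 1 m, then two 90-degree left turns.
relative_motions = [
    (1.0, 0.0, 0.0),
    (1.0, 0.0, math.pi / 2),
    (1.0, 0.0, math.pi / 2),
]

trajectory = [(0.0, 0.0, 0.0)]
for delta in relative_motions:
    trajectory.append(compose(trajectory[-1], delta))

print(trajectory[-1])  # final pose after integrating all increments
```

Because each increment is composed onto the last, small per-frame errors accumulate over time; this drift is why many of the papers below pair odometry with loop closure, learned corrections, or global localization.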


Most implemented papers

DPC-Net: Deep Pose Correction for Visual Localization

utiasSTARS/dpc-net 10 Sep 2017

We use this loss to train a Deep Pose Correction network (DPC-Net) that predicts corrections for a particular estimator, sensor and environment.

Learning Depth from Monocular Videos using Direct Methods

MightyChaos/LKVOLearner CVPR 2018

The ability to predict depth from a single image - using recent advances in CNNs - is of increasing interest to the vision community.

SalientDSO: Bringing Attention to Direct Sparse Odometry

prgumd/SalientDSO 28 Feb 2018

We merge the successes of these two communities and present a way to incorporate semantic information in the form of visual saliency to Direct Sparse Odometry - a highly successful direct sparse VO algorithm.

Deep Auxiliary Learning for Visual Localization and Odometry

decayale/vlocnet 9 Mar 2018

We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation.

Unsupervised Learning of Monocular Depth Estimation and Visual Odometry with Deep Feature Reconstruction

Huangying-Zhan/Depth-VO-Feat CVPR 2018

Despite learning-based methods showing promising results in single view depth estimation and visual odometry, most existing approaches treat the tasks in a supervised manner.

SIPs: Succinct Interest Points from Unsupervised Inlierness Probability Learning

uzh-rpg/sips2_open 3 May 2018

In certain cases, our detector is able to obtain an equivalent number of inliers with as few as 60% of the points required by other detectors.

Network Uncertainty Informed Semantic Feature Selection for Visual SLAM

navganti/SIVO 29 Nov 2018

In order to facilitate long-term localization using a visual simultaneous localization and mapping (SLAM) algorithm, careful feature selection can help ensure that reference points persist over long durations and the runtime and storage complexity of the algorithm remain consistent.

A Generative Map for Image-based Camera Localization

Mingpan/generative_map 18 Feb 2019

For localization, we show that Generative Map achieves comparable performance with current regression models.

Beyond Photometric Loss for Self-Supervised Ego-Motion Estimation

hlzz/DeepMatchVO 25 Feb 2019

Accurate relative pose is one of the key components in visual odometry (VO) and simultaneous localization and mapping (SLAM).

ROVO: Robust Omnidirectional Visual Odometry for Wide-baseline Wide-FOV Camera Systems

renmengqisheng/stereo_multifisheye 28 Feb 2019

For more robust and accurate ego-motion estimation, we add three components to the standard VO pipeline: 1) a hybrid projection model for improved feature matching, 2) a multi-view P3P RANSAC algorithm for pose estimation, and 3) online updating of the rig extrinsic parameters.
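The RANSAC idea used for pose estimation above can be sketched generically: draw a minimal sample, form a model hypothesis, and keep the hypothesis that explains the most correspondences. The toy below is not ROVO's multi-view P3P solver; it substitutes a 2D translation model (one correspondence per hypothesis) purely to show the hypothesize-and-verify loop, with made-up data.

```python
import random

def ransac_translation(src, dst, threshold=0.1, iterations=100, seed=0):
    """Estimate a 2D translation mapping src -> dst, robust to outliers.

    A single correspondence yields a hypothesis; the hypothesis that
    explains the most correspondences within `threshold` wins.
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, -1
    for _ in range(iterations):
        # Minimal sample: one correspondence fixes a translation.
        i = rng.randrange(len(src))
        tx = dst[i][0] - src[i][0]
        ty = dst[i][1] - src[i][1]
        # Verify: count correspondences consistent with this hypothesis.
        inliers = sum(
            1 for (sx, sy), (dx, dy) in zip(src, dst)
            if abs(dx - (sx + tx)) < threshold and abs(dy - (sy + ty)) < threshold
        )
        if inliers > best_inliers:
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

# Toy correspondences: true translation (2, 1); the last two pairs are outliers.
src = [(0, 0), (1, 0), (0, 1), (1, 1), (5, 5), (6, 6)]
dst = [(2, 1), (3, 1), (2, 2), (3, 2), (9, 0), (0, 9)]
print(ransac_translation(src, dst))  # -> ((2, 1), 4)
```

In a real P3P RANSAC, the minimal sample is three 2D-3D correspondences, the hypothesis is a 6-DoF camera pose, and the inlier test is a reprojection-error threshold, but the loop structure is the same.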