Visual Odometry
97 papers with code • 0 benchmarks • 21 datasets
Visual Odometry is an important area of information fusion in which the central aim is to estimate the pose of a robot using data collected by visual sensors.
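At its core, visual odometry accumulates frame-to-frame motion estimates into a global pose. A minimal toy sketch of that accumulation step, using a planar SE(2) pose and hypothetical per-frame motion values (a real pipeline would estimate these from image features):

```python
import math

def compose(pose, delta):
    """Compose a global SE(2) pose (x, y, theta) with a relative
    motion (dx, dy, dtheta) expressed in the robot's own frame --
    the core accumulation step of any visual odometry pipeline."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Frame-to-frame motions as a VO front end might report them
# (hypothetical values): drive a 1 m square with four right turns.
pose = (0.0, 0.0, 0.0)
for delta in [(1.0, 0.0, math.pi / 2)] * 4:
    pose = compose(pose, delta)
print(pose)  # returns to (almost exactly) the starting position
```

Because each step composes a noisy relative estimate, errors accumulate over time (drift), which is why many of the methods below combine VO with loop closure or learned priors.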
Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
Benchmarks
These leaderboards are used to track progress in Visual Odometry
Libraries
Use these libraries to find Visual Odometry models and implementations
Most implemented papers
Event-based Stereo Visual Odometry
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
Empty Cities: a Dynamic-Object-Invariant Space for Visual SLAM
The first challenge is addressed by the use of a convolutional network that learns a multi-class semantic segmentation of the image.
TartanVO: A Generalizable Learning-based VO
We present the first learning-based visual odometry (VO) model, which generalizes to multiple datasets and real-world scenarios and outperforms geometry-based methods in challenging scenes.
DF-VO: What Should Be Learnt for Visual Odometry?
More surprisingly, they show that well-trained networks enable scale-consistent predictions over long videos, though their accuracy remains inferior to traditional methods because they ignore geometric information.
Spatiotemporal Registration for Event-based Visual Odometry
The state-of-the-art method of contrast maximisation recovers the motion from a batch of events by maximising the contrast of the image of warped events.
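The idea behind contrast maximisation can be shown on a toy example: warp each event back to a reference time under a candidate optical flow, accumulate the warped events into an image, and score the flow by the image's variance. The correct flow stacks the events of a moving edge into the same pixels, maximising contrast. A minimal pure-Python sketch with synthetic events (all values are hypothetical; real implementations optimise over continuous motion parameters rather than a small integer grid):

```python
def contrast(events, vx, vy, t_ref=0.0, size=9):
    """Warp events (x, y, t) to t_ref with candidate flow (vx, vy),
    accumulate them into an image, and return its variance."""
    img = [[0.0] * size for _ in range(size)]
    for x, y, t in events:
        wx = round(x - vx * (t - t_ref))
        wy = round(y - vy * (t - t_ref))
        if 0 <= wx < size and 0 <= wy < size:
            img[wy][wx] += 1.0
    n = size * size
    mean = sum(map(sum, img)) / n
    return sum((v - mean) ** 2 for row in img for v in row) / n

# Synthetic events from a point moving with true flow (2, 1) px/s.
true_vx, true_vy = 2.0, 1.0
events = [(1.0 + true_vx * t, 1.0 + true_vy * t, t)
          for t in (0.0, 0.5, 1.0, 1.5, 2.0)]

# Grid search: the correct flow warps all events onto one pixel,
# maximising the contrast of the image of warped events.
best = max(((contrast(events, vx, vy), vx, vy)
            for vx in range(-3, 4) for vy in range(-3, 4)),
           key=lambda c: c[0])
print(best[1], best[2])  # → 2 1
```

The cited paper replaces this batch-wise objective with a spatiotemporal registration formulation, but the warp-and-score principle is the same.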
Backprop KF: Learning Discriminative Deterministic State Estimators
We show that this procedure can be used to train state estimators that use complex input, such as raw camera images, which must be processed using expressive nonlinear function approximators such as convolutional neural networks.
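The key property exploited here is that a Kalman filter is a chain of differentiable operations, so gradients can flow from the filtered estimate back into the network that produces the observations. A toy scalar version illustrates this (the paper's actual model is multivariate, with a CNN emitting both the observation and its uncertainty; the values below are hypothetical):

```python
def kf_step(mu, sigma2, z, r, q=0.1):
    """One predict+update step of a scalar Kalman filter.
    mu, sigma2: prior state mean and variance; z: an observation
    (e.g. produced by a CNN from a raw image); r: its variance.
    Every operation is differentiable, so the whole filter can sit
    inside an end-to-end trained network."""
    sigma2_pred = sigma2 + q              # predict (random-walk model)
    k = sigma2_pred / (sigma2_pred + r)   # Kalman gain
    mu_new = mu + k * (z - mu)            # update mean toward z
    sigma2_new = (1 - k) * sigma2_pred    # update (shrink) variance
    return mu_new, sigma2_new

# Filter a few noisy observations of a state near 1.0.
mu, sigma2 = 0.0, 1.0
for z in (1.2, 0.9, 1.1, 1.0):
    mu, sigma2 = kf_step(mu, sigma2, z, r=0.5)
print(mu, sigma2)  # estimate pulled toward ~1.0, variance reduced
```

In the Backprop KF setting, a loss on `mu` after the final step is backpropagated through every gain computation and into the observation model's weights.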
gvnn: Neural Network Library for Geometric Computer Vision
We introduce gvnn, a neural network library in Torch aimed towards bridging the gap between classic geometric computer vision and deep learning.
Real-Time Panoramic Tracking for Event Cameras
In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom.
Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty
This work proposes a robust visual odometry method for structured environments that combines point features with line and plane segments, extracted through an RGB-D camera.
How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change
Direct visual localization has recently enjoyed a resurgence in popularity with the increasing availability of cheap mobile computing power.