Visual Odometry
98 papers with code • 1 benchmark • 22 datasets
Visual Odometry is an information-fusion task whose central aim is to estimate the pose of a robot incrementally from data collected by visual sensors.
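At its core, frame-to-frame visual odometry recovers a rigid transform between two views. Below is a minimal sketch of one classical building block for RGB-D-style input: aligning matched 3-D points from consecutive frames with the Kabsch (SVD) method. The function name, the synthetic point cloud, and the ground-truth motion are illustrative assumptions, not part of any paper listed here.

```python
import numpy as np

def estimate_pose(src, dst):
    """Estimate the rigid transform (R, t) with dst ~ R @ src + t via Kabsch/SVD.

    src, dst: (N, 3) arrays of matched 3-D points, e.g. back-projected
    from depth images of two consecutive RGB-D frames.
    """
    cs, cd = src.mean(0), dst.mean(0)                  # centroids
    H = (src - cs).T @ (dst - cd)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Hypothetical check: recover a known 30-degree yaw plus a translation.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
th = np.deg2rad(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([0.1, -0.2, 0.3])
R, t = estimate_pose(pts, pts @ R_true.T + t_true)
```

In a real pipeline this step sits inside a robust loop (e.g. RANSAC over feature matches) so that outlier correspondences do not corrupt the estimate.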
Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
Most implemented papers
The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM
New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array.
Fast Cylinder and Plane Extraction from Depth Cameras for Visual Odometry
This paper presents CAPE, a method to extract planes and cylinder segments from organized point clouds, which processes 640x480 depth images on a single CPU core at an average of 300 Hz, by operating on a grid of planar cells.
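The cell-wise idea behind this kind of method can be sketched simply: split the organized point cloud into grid cells, fit a plane to each cell by PCA, and keep cells whose out-of-plane variance is small. This is a simplified illustration, not the authors' CAPE implementation; the function name and the curvature threshold are assumptions.

```python
import numpy as np

def fit_cell_plane(points, curvature_thresh=1e-3):
    """Fit a plane to one grid cell of an organized point cloud via PCA.

    Returns (normal, centroid, is_planar). A cell counts as planar when the
    smallest eigenvalue of the covariance (out-of-plane variance) is tiny
    relative to the total variance.
    """
    c = points.mean(0)
    cov = np.cov((points - c).T)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = evecs[:, 0]                 # direction of least variance
    curvature = evals[0] / evals.sum()
    return normal, c, curvature < curvature_thresh

# Hypothetical cell: a noisy patch of the plane z = 0 should be flagged planar,
# with a normal close to the z axis.
rng = np.random.default_rng(1)
cell = np.column_stack([rng.uniform(0.0, 1.0, 200),
                        rng.uniform(0.0, 1.0, 200),
                        rng.normal(0.0, 1e-4, 200)])
normal, centroid, is_planar = fit_cell_plane(cell)
```

Operating per cell on the image grid is what makes this class of methods fast: each fit touches only a small, contiguous block of the depth image.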
CNN-SVO: Improving the Mapping in Semi-Direct Visual Odometry Using Single-Image Depth Prediction
Reliable feature correspondence between frames is a critical step in visual odometry (VO) and visual simultaneous localization and mapping (V-SLAM) algorithms.
Edge-Direct Visual Odometry
In contrast, our method builds naturally on direct visual odometry methods with minimal added computation.
Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video
To the best of our knowledge, this is the first work to show that deep networks trained using unlabelled monocular videos can predict globally scale-consistent camera trajectories over a long video sequence.
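One way to encourage scale-consistent predictions is to penalize the normalized difference between depth maps of adjacent frames once they are brought into the same view. The sketch below shows only that penalty, assuming the warping has already been done; the function name and the toy depth maps are illustrative, not the paper's implementation.

```python
import numpy as np

def depth_consistency_loss(d_a, d_b, eps=1e-6):
    """Mean normalized depth difference |d_a - d_b| / (d_a + d_b).

    The ratio is unchanged if both maps are scaled by the same factor but
    grows when their scales drift apart, so minimizing it over consecutive
    frames pushes the network toward a single shared scale.
    """
    return np.mean(np.abs(d_a - d_b) / (d_a + d_b + eps))

# Hypothetical depth maps: identical maps give zero loss; a factor-2
# scale drift gives |d - 2d| / (d + 2d) = 1/3.
d = np.full((4, 4), 2.0)
loss_same = depth_consistency_loss(d, d)
loss_scaled = depth_consistency_loss(d, 2.0 * d)
```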
A Fast and Robust Place Recognition Approach for Stereo Visual Odometry Using LiDAR Descriptors
Place recognition is a core component of Simultaneous Localization and Mapping (SLAM) algorithms.
Visual Odometry Revisited: What Should Be Learnt?
In this work we present a monocular visual odometry (VO) algorithm which leverages geometry-based methods and deep learning.
Neural Outlier Rejection for Self-Supervised Keypoint Learning
By making the sampling of inlier-outlier sets from point-pair correspondences fully differentiable within the keypoint learning framework, we show that we are able to simultaneously self-supervise keypoint description and improve keypoint matching.
Nonparametric Continuous Sensor Registration
The functions can be defined on arbitrary smooth manifolds where the action of a Lie group aligns them.
Robust Ego and Object 6-DoF Motion Estimation and Tracking
The problem of tracking self-motion as well as motion of objects in the scene using information from a camera is known as multi-body visual odometry and is a challenging task.