Visual Odometry
99 papers with code • 1 benchmark • 23 datasets
Visual Odometry is an important area of information fusion whose central aim is to estimate the pose of a robot using data collected by visual sensors.
Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
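At its core, visual odometry recovers the relative camera pose between two views from point correspondences. A minimal, classical sketch of this step (not tied to any specific paper listed here) is the linear eight-point algorithm: estimate the essential matrix from normalized image correspondences, then decompose it into a rotation and a unit translation, using a cheirality (positive-depth) check to pick the right candidate. All function names below are illustrative, and the example uses noise-free synthetic correspondences; real pipelines add feature matching, RANSAC, and scale handling.

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Linear 8-point estimate of E such that [x2,1]^T E [x1,1] = 0,
    given N>=8 correspondences on the normalized image plane (N x 2 arrays)."""
    ones = np.ones(len(x1))
    A = np.stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0],            x1[:, 1],            ones,
    ], axis=1)
    E = np.linalg.svd(A)[2][-1].reshape(3, 3)        # null-space solution
    U, _, Vt = np.linalg.svd(E)                       # project onto the
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt          # essential-matrix manifold

def triangulate(R, t, p1, p2):
    """Linear (DLT) triangulation with P1 = [I|0], P2 = [R|t]."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    A = np.array([p1[0] * P1[2] - P1[0], p1[1] * P1[2] - P1[1],
                  p2[0] * P2[2] - P2[0], p2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def recover_pose(E, x1, x2):
    """Pick the (R, t) decomposition of E that puts points in front of both cameras."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0: U = -U
    if np.linalg.det(Vt) < 0: Vt = -Vt
    W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
    best = None
    for R in (U @ W @ Vt, U @ W.T @ Vt):
        for t in (U[:, 2], -U[:, 2]):
            # cheirality: count correspondences with positive depth in both views
            n = sum(1 for p1, p2 in zip(x1, x2)
                    if (X := triangulate(R, t, p1, p2))[2] > 0
                    and (R @ X + t)[2] > 0)
            if best is None or n > best[0]:
                best = (n, R, t)
    return best[1], best[2]

# Synthetic two-view demo: known motion, noise-free correspondences.
rng = np.random.default_rng(0)
theta = 0.1                                           # small yaw between frames
R_true = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
t_true = np.array([0.5, 0.1, 0.2])
X = rng.uniform([-2, -2, 4], [2, 2, 8], size=(20, 3)) # points in front of both cameras
x1 = X[:, :2] / X[:, 2:]                              # camera 1 at the origin
Xc2 = X @ R_true.T + t_true                           # same points in camera 2
x2 = Xc2[:, :2] / Xc2[:, 2:]

E = eight_point_essential(x1, x2)
R_hat, t_hat = recover_pose(E, x1, x2)                # t_hat is unit-norm
```

Note that monocular two-view geometry only recovers translation up to scale (`t_hat` is a unit vector), which is exactly the ambiguity that stereo extensions and scale-optimization methods in the list below address.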
Most implemented papers
Sparse Representations for Object and Ego-motion Estimation in Dynamic Scenes
Dynamic scenes that contain both object motion and egomotion are a challenge for monocular visual odometry (VO).
MAVNet: an Effective Semantic Segmentation Micro-Network for MAV-based Tasks
Real-time semantic image segmentation on platforms subject to size, weight and power (SWaP) constraints is a key area of interest for air surveillance and inspection.
Continuous Direct Sparse Visual Odometry from RGB-D Images
This paper reports on a novel formulation and evaluation of visual odometry from RGB-D images.
Recurrent Neural Network for (Un-)supervised Learning of Monocular Video Visual Odometry and Depth
Deep learning-based, single-view depth estimation methods have recently shown highly promising results.
Extending Monocular Visual Odometry to Stereo Camera Systems by Scale Optimization
This paper proposes a novel approach for extending monocular visual odometry to a stereo camera system.
UnOS: Unified Unsupervised Optical-Flow and Stereo-Depth Estimation by Watching Videos
In this paper, we propose UnOS, a unified system for unsupervised optical-flow and stereo-depth estimation using convolutional neural networks (CNNs), which takes advantage of their inherent geometric consistency under the rigid-scene assumption.
DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing
Semantically interpreting the traffic scene is crucial for autonomous transportation and robotics systems.
Adaptive Continuous Visual Odometry from RGB-D Images
In this paper, we extend the recently developed continuous visual odometry framework for RGB-D cameras to an adaptive framework via online hyperparameter learning.
SimVODIS: Simultaneous Visual Odometry, Object Detection, and Instance Segmentation
Intelligent agents need to understand the surrounding environment to provide meaningful services to or interact intelligently with humans.
A Keyframe-based Continuous Visual SLAM for RGB-D Cameras via Nonparametric Joint Geometric and Appearance Representation
The experimental evaluations using publicly available RGB-D benchmarks show that the developed keyframe selection technique using continuous visual odometry outperforms its robust dense (and direct) visual odometry equivalent.