Visual Localization
153 papers with code • 5 benchmarks • 20 datasets
Visual Localization is the problem of estimating the camera pose of a given image relative to a visual representation of a known scene.
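In the common structure-based formulation, this means recovering a camera's position and orientation from correspondences between 2D keypoints in the query image and 3D points in a scene model. A minimal numpy sketch of that idea, using a toy Direct Linear Transform (DLT) solver on synthetic data as a stand-in for a full PnP+RANSAC pipeline (function names here are illustrative, not from any library):

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3D points (N, 4) with a 3x4 camera matrix."""
    x = (P @ X.T).T
    return x[:, :2] / x[:, 2:3]

def dlt_pose(X, x):
    """Direct Linear Transform: recover the 3x4 projection matrix from
    >= 6 noise-free 2D-3D correspondences (a toy stand-in for PnP)."""
    A = []
    for Xw, u in zip(X, x):
        A.append(np.concatenate([np.zeros(4), -Xw, u[1] * Xw]))
        A.append(np.concatenate([Xw, np.zeros(4), -u[0] * Xw]))
    # The projection matrix is the null vector of A, up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

# Toy scene: 3D map points (homogeneous) seen by a ground-truth camera.
rng = np.random.default_rng(0)
X = np.hstack([rng.uniform(-1, 1, (8, 3)) + [0, 0, 5], np.ones((8, 1))])
K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
R, t = np.eye(3), np.array([[0.2], [-0.1], [0.3]])
P_true = K @ np.hstack([R, t])

x = project(P_true, X)      # "observed" keypoints in the query image
P_est = dlt_pose(X, x)      # localize: recover the camera from matches
err = np.abs(project(P_est, X) - x).max()
print("max reprojection error (px):", err)
```

Real pipelines add robust estimation (RANSAC), calibrated solvers (P3P/EPnP), and descriptor matching against a 3D map; the papers below cover learned variants of each stage.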
Most implemented papers
Kimera: from SLAM to Spatial Perception with 3D Dynamic Scene Graphs
This mental model captures geometric and semantic aspects of the scene, describes the environment at multiple levels of abstraction (e.g., objects, rooms, buildings), and includes static and dynamic entities and their relations (e.g., a person is in a room at a given time).
Learning Multi-Scene Absolute Pose Regression with Transformers
Absolute camera pose regressors estimate the position and orientation of a camera from the captured image alone.
PICCOLO: Point Cloud-Centric Omnidirectional Localization
Our loss function, called sampling loss, is point cloud-centric, evaluated at the projected location of every point in the point cloud.
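As a rough illustration of the point-cloud-centric idea (a toy perspective-camera version, not PICCOLO's actual omnidirectional, differentiable formulation), a loss can be evaluated at the projected location of every point by sampling the image there and comparing against the intensity stored on the point:

```python
import numpy as np

def sampling_loss(points, colors, image, K, R, t):
    """Toy point-cloud-centric loss: project every 3D point into the
    image and compare the intensity sampled at its projected pixel with
    the intensity stored on the point (nearest-neighbour sampling)."""
    cam = (R @ points.T).T + t                    # world -> camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective division
    H, W = image.shape
    px = np.clip(np.rint(uv).astype(int), 0, [W - 1, H - 1])
    sampled = image[px[:, 1], px[:, 0]]
    return float(np.mean((sampled - colors) ** 2))

rng = np.random.default_rng(0)
image = rng.uniform(size=(128, 128))
K = np.array([[100., 0, 64], [0, 100., 64], [0, 0, 1]])
points = np.column_stack([rng.uniform(-1, 1, 100),
                          rng.uniform(-1, 1, 100),
                          rng.uniform(4, 6, 100)])
# Colour each point with the pixel it hits under the true pose (identity),
# so the loss is zero exactly at the true pose.
uv = (K @ points.T).T
px = np.rint(uv[:, :2] / uv[:, 2:3]).astype(int)
colors = image[px[:, 1], px[:, 0]]

loss_true = sampling_loss(points, colors, image, K, np.eye(3), np.zeros(3))
loss_wrong = sampling_loss(points, colors, image, K, np.eye(3),
                           np.array([0.5, 0.0, 0.0]))
print(loss_true, loss_wrong)
```

Minimizing such a loss over the pose pulls every point's projection toward image regions that match its stored appearance, which is what makes the formulation point cloud-centric rather than pixel-centric.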
HypLiLoc: Towards Effective LiDAR Pose Regression with Hyperbolic Fusion
LiDAR relocalization plays a crucial role in many fields, including robotics, autonomous driving, and computer vision.
GlueStick: Robust Image Matching by Sticking Points and Lines Together
Line segments are powerful features complementary to points.
LightGlue: Local Feature Matching at Light Speed
We introduce LightGlue, a deep neural network that learns to match local features across images.
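For contrast, the classical baseline that learned matchers such as LightGlue improve on is mutual nearest-neighbour descriptor matching with Lowe's ratio test. A small numpy sketch on synthetic descriptors (function name illustrative):

```python
import numpy as np

def mutual_nn_matches(desc0, desc1, ratio=0.8):
    """Mutual nearest-neighbour matching with Lowe's ratio test: the
    hand-crafted baseline that attention-based matchers replace."""
    d = np.linalg.norm(desc0[:, None, :] - desc1[None, :, :], axis=-1)
    nn01 = d.argmin(axis=1)           # best match in image 1 for each desc0
    nn10 = d.argmin(axis=0)           # best match in image 0 for each desc1
    matches = []
    for i, j in enumerate(nn01):
        if nn10[j] != i:
            continue                  # keep only mutual nearest neighbours
        second = np.partition(d[i], 1)[1]
        if d[i, j] < ratio * second:  # Lowe's ratio test
            matches.append((i, j))
    return matches

rng = np.random.default_rng(1)
desc1 = rng.normal(size=(10, 32))
perm = rng.permutation(10)
desc0 = desc1[perm] + 0.01 * rng.normal(size=(10, 32))  # noisy copies
matches = mutual_nn_matches(desc0, desc1)
print(matches)
```

Learned matchers replace the fixed distance-and-ratio heuristic with a network that reasons jointly over both sets of keypoints, which is what buys robustness under viewpoint and illumination change.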
3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection
To minimize the number of cameras needed for surround perception, we utilize fisheye cameras.
How to Train a CAT: Learning Canonical Appearance Transformations for Direct Visual Localization Under Illumination Change
Direct visual localization has recently enjoyed a resurgence in popularity with the increasing availability of cheap mobile computing power.
DPC-Net: Deep Pose Correction for Visual Localization
We use this loss to train a Deep Pose Correction network (DPC-Net) that predicts corrections for a particular estimator, sensor and environment.
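The idea of composing a predicted correction with an estimator's output can be sketched in numpy. In this toy (not DPC-Net's architecture), the "network output" is replaced by the ideal left-multiplicative SE(3) residual between the true and estimated poses:

```python
import numpy as np

def se3(rotvec, trans):
    """Build a 4x4 SE(3) matrix from an axis-angle rotation and a
    translation, via Rodrigues' formula."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rotvec / theta
        Kx = np.array([[0, -k[2], k[1]],
                       [k[2], 0, -k[0]],
                       [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * Kx + (1 - np.cos(theta)) * Kx @ Kx
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = trans
    return T

# A visual odometry estimate that has drifted from the true pose.
T_true = se3(np.array([0.0, 0.30, 0.0]), np.array([1.0, 0.0, 2.0]))
T_est  = se3(np.array([0.0, 0.25, 0.0]), np.array([1.1, 0.0, 1.9]))

# A pose-correction network is trained to predict this residual from
# images; applying it left-composes the correction with the estimate.
T_corr = T_true @ np.linalg.inv(T_est)
T_fixed = T_corr @ T_est
err = np.abs(T_fixed - T_true).max()
print("residual after correction:", err)
```

Because the correction is a full SE(3) transform, the same scheme absorbs systematic rotational and translational biases of a given estimator, sensor, and environment.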
Visual Servoing of Unmanned Surface Vehicle from Small Tethered Unmanned Aerial Vehicle
The proposed motor schema uses the USV's coordinates from the visual localization subsystem to control the UAV's camera and track the USV with minimal camera movements, so that the USV always remains in the camera's field of view.