Visual Localization
151 papers with code • 5 benchmarks • 20 datasets
Visual Localization is the problem of estimating the camera pose of a given image relative to a visual representation of a known scene.
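The geometric core of the task, recovering a camera's projection from 2D-3D correspondences, can be sketched with a plain Direct Linear Transform. The intrinsics, point cloud, and pose below are synthetic stand-ins for illustration, not taken from any listed paper.

```python
import numpy as np

def estimate_projection_dlt(pts3d, pts2d):
    """Estimate a 3x4 camera projection matrix from 2D-3D
    correspondences via the Direct Linear Transform (DLT).
    pts3d: (N, 3), pts2d: (N, 2), with N >= 6."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    h = np.hstack([pts3d, np.ones((len(pts3d), 1))]) @ P.T
    return h[:, :2] / h[:, 2:3]

# Synthetic check: project points with a known camera, then recover it.
rng = np.random.default_rng(0)
pts3d = rng.uniform(-1, 1, size=(12, 3)) + np.array([0, 0, 5])
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1.0]])  # illustrative intrinsics
P_true = K @ np.hstack([np.eye(3), np.array([[0.1], [0.0], [0.2]])])
pts2d = project(P_true, pts3d)
P_est = estimate_projection_dlt(pts3d, pts2d)
err = np.abs(project(P_est, pts3d) - pts2d).max()  # reprojection error in pixels
```

In practice the correspondences come from feature matching against the scene model and the solve is wrapped in RANSAC; this sketch shows only the noiseless algebraic step.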
Libraries
Use these libraries to find Visual Localization models and implementations.
Most implemented papers
AdaLAM: Revisiting Handcrafted Outlier Detection
Local feature matching is a critical component of many computer vision pipelines, including among others Structure-from-Motion, SLAM, and Visual Localization.
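A minimal sketch of the raw matching stage that outlier filters like AdaLAM then prune: a Lowe-style ratio test over descriptor distances. The toy 2-D descriptors are invented for illustration.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a putative match only when the nearest descriptor in
    desc_b is clearly better than the second nearest (ratio test).
    desc_a: (Na, D), desc_b: (Nb, D) with Nb >= 2."""
    # Pairwise Euclidean distances between all descriptors.
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = dists[rows, best] < ratio * dists[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

# Tiny example: two clean correspondences match; the ambiguous one is rejected.
a = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
b = np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
matches = ratio_test_matches(a, b)  # a[2] is equidistant to b[0] and b[1], so dropped
```

Methods such as AdaLAM add spatial consistency checks on top of this purely descriptor-based filtering.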
Patch-NetVLAD: Multi-Scale Fusion of Locally-Global Descriptors for Place Recognition
Visual Place Recognition is a challenging task for robotics and autonomous systems, which must deal with the twin problems of appearance and viewpoint change in an ever-changing world.
CrossLoc: Scalable Aerial Localization Assisted by Multimodal Synthetic Data
We present a visual localization system that learns to estimate camera poses in the real world with the help of synthetic data.
Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions
Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds.
Particle Filter Networks with Application to Visual Localization
Particle filtering is a powerful approach to sequential state estimation and finds application in many domains, including robot localization and object tracking.
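The bootstrap particle filter underlying such methods can be sketched in a few lines. The 1-D corridor, motion model, and noise levels below are illustrative assumptions, not the paper's learned network.

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter_step(particles, weights, motion, measurement, noise=0.5):
    """One predict-update-resample cycle of a bootstrap particle
    filter for 1-D localization (all parameters illustrative)."""
    # Predict: apply the motion model with process noise.
    particles = particles + motion + rng.normal(0, 0.1, size=len(particles))
    # Update: reweight by a Gaussian measurement likelihood.
    weights = weights * np.exp(-0.5 * ((particles - measurement) / noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a robot moving +1 per step along a corridor, starting near 0.
particles = rng.uniform(-5, 5, size=1000)
weights = np.full(1000, 1.0 / 1000)
true_pos = 0.0
for _ in range(20):
    true_pos += 1.0
    z = true_pos + rng.normal(0, 0.5)  # noisy position sensor
    particles, weights = particle_filter_step(particles, weights, 1.0, z)
estimate = particles.mean()  # should track true_pos closely
```

Particle Filter Networks make the motion and measurement models above differentiable so the whole filter can be trained end to end.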
G2D: from GTA to Data
This document describes G2D, software for capturing videos from Grand Theft Auto V (GTA V), a popular role-playing game set in an expansive virtual city.
Panoramic Annular Localizer: Tackling the Variation Challenges of Outdoor Localization Using Panoramic Annular Images and Active Deep Descriptors
The panoramic annular images captured by the single camera are processed and fed into the NetVLAD network to form the active deep descriptor, and sequential matching is utilized to generate the localization result.
CMRNet++: Map and Camera Agnostic Monocular Visual Localization in LiDAR Maps
In this paper, we take this approach a step further by introducing CMRNet++, a significantly more robust model that not only generalizes effectively to new places but is also independent of the camera parameters.
Robust Image Retrieval-based Visual Localization using Kapture
To demonstrate this, we present a versatile pipeline for visual localization that facilitates the use of different local and global features, 3D data (e.g. depth maps), non-vision sensor data (e.g. IMU, GPS, WiFi), and various processing algorithms.
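Retrieval-based localization of this kind starts by ranking database images against the query via global descriptors; the retrieved images' known poses then seed local pose estimation. A sketch with random stand-in descriptors (real systems use learned ones such as NetVLAD):

```python
import numpy as np

def retrieve_top_k(query_desc, db_descs, k=3):
    """Rank database images by cosine similarity of global
    image descriptors and return the indices of the top k."""
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q  # cosine similarity of each database image to the query
    return np.argsort(-sims)[:k]

rng = np.random.default_rng(1)
db = rng.normal(size=(100, 64))              # stand-in global descriptors
query = db[42] + 0.05 * rng.normal(size=64)  # query close to database image 42
top = retrieve_top_k(query, db)
```

The cosine metric makes retrieval insensitive to descriptor magnitude, which is why global descriptors are typically L2-normalized before indexing.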
VR-Caps: A Virtual Environment for Capsule Endoscopy
Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate complex software and hardware functions.