Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and re-localize cameras after tracking is lost.
We thus propose to learn keypoint detection and description jointly with a predictor of local descriptor discriminativeness.
Ranked #1 for Camera Localization on the Aachen Day-Night benchmark
The most promising approach, inspired by reinforcement learning, is to replace the deterministic hypothesis selection with a probabilistic selection for which we can derive the expected loss w.r.t. all learnable parameters.
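The core trick here is that while picking the best-scoring hypothesis (argmax) is non-differentiable, the *expected* task loss under a softmax distribution over hypothesis scores is smooth in the scores. A minimal numpy sketch of this idea follows; the function names and the closed-form gradient derivation are illustrative, not taken from the paper's code:

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over hypothesis scores."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def expected_loss(scores, losses):
    """Probabilistic hypothesis selection: instead of taking the
    argmax-scored hypothesis (non-differentiable), treat the softmax
    of the scores as a selection distribution. The expected task loss
    E = sum_j p_j * loss_j is then smooth in the scores."""
    p = softmax(scores)
    return float(p @ losses)

def expected_loss_grad(scores, losses):
    """Closed-form gradient dE/ds_k = p_k * (loss_k - E),
    obtained by differentiating the softmax weights."""
    p = softmax(scores)
    E = p @ losses
    return p * (losses - E)
```

Because the gradient exists for every score, a scoring network can be trained end to end against an arbitrary task loss, which is exactly what the deterministic argmax selection prevents.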
Popular research areas like autonomous driving and augmented reality have renewed the interest in image-based camera localization.
In contrast, we learn hypothesis search in a principled fashion that lets us optimize an arbitrary task loss during training, leading to large improvements on classic computer vision tasks.
Ranked #1 for Horizon Line Estimation on the Horizon Lines in the Wild benchmark
In this paper, we go Back to the Feature: we argue that deep networks should focus on learning robust and invariant visual features, while the geometric estimation should be left to principled algorithms.
In this work, we fit the 6D camera pose to a set of noisy correspondences between the 2D input image and a known 3D environment.
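Fitting a pose to correspondences that contain outliers is classically done by hypothesize-and-verify: repeatedly fit a camera model to a minimal subset, then keep the hypothesis with the most inliers under a reprojection-error threshold. The sketch below shows this with a DLT (Direct Linear Transform) fit of a 3x4 projection matrix inside a RANSAC loop; this is the standard textbook pipeline, not this paper's specific method, and all names and thresholds are illustrative:

```python
import numpy as np

def fit_projection_dlt(X, x):
    """Fit a 3x4 projection matrix P with x ~ P @ [X; 1] via the
    Direct Linear Transform. Needs at least 6 correspondences."""
    n = X.shape[0]
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xh = np.append(X[i], 1.0)      # homogeneous 3D point
        u, v = x[i]
        A[2 * i, 4:8] = -Xh            # rows from x cross (P X) = 0
        A[2 * i, 8:12] = v * Xh
        A[2 * i + 1, 0:4] = Xh
        A[2 * i + 1, 8:12] = -u * Xh
    _, _, Vt = np.linalg.svd(A)        # least-squares null vector
    return Vt[-1].reshape(3, 4)

def reproj_errors(P, X, x):
    """Pixel distance between projected 3D points and observations."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return np.linalg.norm(proj - x, axis=1)

def ransac_pose(X, x, iters=200, thresh=2.0, seed=0):
    """Hypothesize-and-verify: minimal 6-point DLT fits, scored by
    inlier count; the best hypothesis is refit on all its inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(X), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(X), 6, replace=False)
        P = fit_projection_dlt(X[idx], x[idx])
        inliers = reproj_errors(P, X, x) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_projection_dlt(X[best_inliers], x[best_inliers]), best_inliers
```

The deterministic inlier-count scoring in this loop is precisely the step that the learned, probabilistic hypothesis selection discussed above replaces.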