DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization

ECCV 2020 · Juan Du, Rui Wang, Daniel Cremers

For relocalization in large-scale point clouds, we propose the first approach that unifies global place recognition and local 6DoF pose refinement. To this end, we design a Siamese network that jointly learns 3D local feature detection and description directly from raw 3D points. It integrates FlexConv and Squeeze-and-Excitation (SE) to ensure that the learned local descriptor captures multi-level geometric information and channel-wise relations. To detect 3D keypoints, we predict the discriminativeness of the local descriptors in an unsupervised manner. We generate the global descriptor by directly aggregating the learned local descriptors with an effective attention mechanism. In this way, local and global 3D descriptors are inferred in a single forward pass. Experiments on various benchmarks demonstrate that our method achieves competitive results for both global point cloud retrieval and local point cloud registration in comparison to state-of-the-art approaches. To validate the generalizability and robustness of our 3D keypoints, we further show that our method, without fine-tuning, also performs favorably on the registration of point clouds generated by a visual SLAM system. Code and related materials are available at https://vision.in.tum.de/research/vslam/dh3d.
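The two building blocks named in the abstract, SE-style channel re-weighting of local descriptors and attention-based aggregation into a single global descriptor, can be illustrated with a minimal PyTorch sketch. This is an illustrative assumption of how such modules are commonly wired, not the paper's implementation: the module names (SEBlock, AttentionAggregation), the descriptor dimension, and the reduction ratio are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-Excitation gating over descriptor channels (Hu et al., 2018).
    Hypothetical stand-in for the SE component mentioned in the abstract."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        # x: (B, N, C) -- per-point local descriptors
        s = x.mean(dim=1)                                  # squeeze: average over points
        s = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))   # excitation: channel gates in (0, 1)
        return x * s.unsqueeze(1)                          # re-weight descriptor channels

class AttentionAggregation(nn.Module):
    """Aggregate per-point local descriptors into one global descriptor
    via a learned soft attention over points (illustrative sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, local_desc):
        # local_desc: (B, N, C)
        w = torch.softmax(self.score(local_desc), dim=1)   # (B, N, 1) per-point weights
        g = (w * local_desc).sum(dim=1)                    # attention-weighted sum -> (B, C)
        return F.normalize(g, dim=-1)                      # L2-normalized global descriptor

# Toy usage: 4096 points with 128-D local descriptors (the 4096 matches the
# DH3D-4096 input size; the 128-D width is an assumption).
desc = torch.randn(2, 4096, 128)
desc = SEBlock(128)(desc)
global_desc = AttentionAggregation(128)(desc)
print(global_desc.shape)  # torch.Size([2, 128])
```

Because both local descriptors and their attention-weighted aggregate come out of the same pass, this layout mirrors the abstract's claim that local and global descriptors are inferred in a single forward pass.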

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| 3D Place Recognition | Oxford RobotCar Dataset | DH3D | AR@1 | 74.2 | #7 |
| 3D Place Recognition | Oxford RobotCar Dataset | DH3D | AR@1% | 85.3 | #8 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | DH3D-4096 (baseline, only global desc.) | recall@top1% | 84.26 | #21 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | DH3D-4096 (baseline, only global desc.) | recall@top1 | 73.28 | #17 |
