REGTR: End-to-end Point Cloud Correspondences with Transformers

CVPR 2022  ·  Zi Jian Yew, Gim Hee Lee

Despite recent success in incorporating learning into point cloud registration, many works focus on learning feature descriptors and continue to rely on nearest-neighbor feature matching and outlier filtering through RANSAC to obtain the final set of correspondences for pose estimation. In this work, we conjecture that attention mechanisms can replace the role of explicit feature matching and RANSAC, and thus propose an end-to-end framework to directly predict the final set of correspondences. We use a network architecture consisting primarily of transformer layers containing self- and cross-attention, and train it to predict the probability that each point lies in the overlapping region, together with its corresponding position in the other point cloud. The required rigid transformation can then be estimated directly from the predicted correspondences without further post-processing. Despite its simplicity, our approach achieves state-of-the-art performance on the 3DMatch and ModelNet benchmarks. Our source code can be found at https://github.com/yewzijian/RegTR.
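The last step described in the abstract, estimating a rigid transformation directly from predicted correspondences, is typically done in closed form with a weighted Kabsch/Procrustes solve, using the predicted overlap probabilities as weights. The sketch below illustrates that closed-form step with NumPy; the function name and the use of overlap probabilities as weights are illustrative assumptions, not REGTR's actual API.

```python
import numpy as np

def rigid_transform_from_correspondences(src, tgt, weights):
    """Weighted Kabsch solve: least-squares rigid transform (R, t)
    mapping src[i] -> tgt[i], with each correspondence weighted by
    its predicted overlap probability (illustrative, not REGTR's API)."""
    w = weights / weights.sum()                 # normalize weights
    mu_src = (w[:, None] * src).sum(axis=0)     # weighted centroids
    mu_tgt = (w[:, None] * tgt).sum(axis=0)
    src_c = src - mu_src                        # centre both clouds
    tgt_c = tgt - mu_tgt
    H = (w[:, None] * src_c).T @ tgt_c          # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_tgt - R @ mu_src
    return R, t
```

With perfect correspondences this recovers the ground-truth pose exactly; in the end-to-end setting, down-weighting low-overlap-probability points makes the solve robust without a separate RANSAC stage.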


Results


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Point Cloud Registration | 3DLoMatch (10–30% overlap) | REGTR | Recall (correspondence RMSE below 0.2) | 64.8 | #3 |
| Point Cloud Registration | 3DMatch (at least 30% overlap, sample 5k interest points) | REGTR | Recall (correspondence RMSE below 0.2) | 92 | #2 |
