Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast

31 May 2023 · Guofan Fan, Zekun Qi, Wenkai Shi, Kaisheng Ma

Geometry and color information provided by point clouds are both crucial for 3D scene understanding. These two types of information characterize different aspects of point clouds, but existing methods lack an elaborate design for their discrimination and relevance. Hence, we explore a 3D self-supervised paradigm that can better utilize the relations between point cloud information. Specifically, we propose a universal 3D scene pre-training framework via Geometry-Color Contrast (Point-GCC), which aligns geometry and color information using a Siamese network. To accommodate practical application tasks, we design (i) hierarchical supervision, combining point-level contrast and reconstruction with object-level contrast based on a novel deep clustering module, to close the gap between pre-training and downstream tasks; and (ii) an architecture-agnostic backbone that adapts to various downstream models. Benefiting from the object-level representation associated with downstream tasks, Point-GCC can directly evaluate model performance, and the results demonstrate the effectiveness of our method. Transfer learning results on a wide range of tasks also show consistent improvements across all datasets, e.g., new state-of-the-art object detection results on the SUN RGB-D and S3DIS datasets. Code will be released at https://github.com/Asterisci/Point-GCC.
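To make the core idea concrete, below is a minimal sketch of the point-level geometry-color contrast the abstract describes: an InfoNCE-style loss between per-point embeddings from the two Siamese branches, where the geometry and color features of the same point form the positive pair. The function name, tensor shapes, and temperature value are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of point-level Geometry-Color Contrast (not the official code).
import torch
import torch.nn.functional as F

def geometry_color_infonce(geo_feat: torch.Tensor,
                           col_feat: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """geo_feat, col_feat: (N, D) per-point embeddings from the geometry and
    color branches of the Siamese network; row i of each tensor describes the
    same point, so the positive pairs lie on the diagonal."""
    geo = F.normalize(geo_feat, dim=-1)
    col = F.normalize(col_feat, dim=-1)
    logits = geo @ col.t() / temperature                      # (N, N) similarities
    targets = torch.arange(geo.size(0), device=geo.device)   # diagonal positives
    # Symmetric loss: geometry -> color and color -> geometry directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

In this reading, the object-level contrast would apply the same loss to cluster-pooled features from the deep clustering module rather than to raw per-point embeddings.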

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| 3D Object Detection | S3DIS | Point-GCC+TR3D | mAP@0.5 | 56.7 | #1 |
| 3D Object Detection | S3DIS | Point-GCC+TR3D | mAP@0.25 | 75.1 | #1 |
| Unsupervised 3D Semantic Segmentation | ScanNetV2 | Point-GCC+PointNet++ | mIoU | 18.3 | #1 |
| 3D Object Detection | ScanNetV2 | Point-GCC+TR3D | mAP@0.25 | 73.1 | #7 |
| 3D Object Detection | ScanNetV2 | Point-GCC+TR3D | mAP@0.5 | 59.6 | #7 |
| 3D Object Detection | SUN-RGBD val | Point-GCC+TR3D+FF | mAP@0.25 | 69.7 | #1 |
| 3D Object Detection | SUN-RGBD val | Point-GCC+TR3D+FF | mAP@0.5 | 54.0 | #1 |
| 3D Object Detection | SUN-RGBD val | Point-GCC+TR3D | mAP@0.25 | 67.7 | #4 |
| 3D Object Detection | SUN-RGBD val | Point-GCC+TR3D | mAP@0.5 | 51.0 | #5 |
