CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training

Pre-training across 3D vision and language remains under development because of limited training data. Recent works attempt to transfer vision-language pre-training models to 3D vision. PointCLIP converts point cloud data to multi-view depth maps and adopts CLIP for shape classification. However, its performance is restricted by the domain gap between rendered depth maps and images, as well as the diversity of depth distributions. To address this issue, we propose CLIP2Point, an image-depth pre-training method that uses contrastive learning to transfer CLIP to the 3D domain and adapts it to point cloud classification. We introduce a new depth rendering setting that produces a better visual effect, and then render 52,460 pairs of images and depth maps from ShapeNet for pre-training. The pre-training scheme of CLIP2Point combines cross-modality learning, which enforces the depth features to capture expressive visual and textual features, with intra-modality learning, which enhances the invariance of depth aggregation. Additionally, we propose a novel Dual-Path Adapter (DPA) module, i.e., a dual-path structure with simplified adapters for few-shot learning. The dual-path structure allows the joint use of CLIP and CLIP2Point, and the simplified adapter fits few-shot tasks well without post-search. Experimental results show that CLIP2Point is effective in transferring CLIP knowledge to 3D vision. It outperforms PointCLIP and other self-supervised 3D networks, achieving state-of-the-art results on zero-shot and few-shot classification.
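The abstract describes the pre-training objective as a combination of a cross-modality contrastive loss (rendered depth vs. CLIP image features) and an intra-modality contrastive loss (between aggregated depth views). The sketch below illustrates that idea only; it is not the authors' code, and the encoder names, the symmetric InfoNCE helper, the temperature value, and the equal loss weighting are illustrative assumptions.

```python
# Minimal sketch of a CLIP2Point-style pre-training objective:
# cross-modality (depth <-> image) plus intra-modality (depth <-> depth)
# contrastive losses. Encoders are passed in; CLIP's image encoder is frozen.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of embeddings of shape (B, D)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)    # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def pretraining_loss(image_encoder, depth_encoder, images, depth_views_1, depth_views_2):
    """images: (B, 3, H, W) rendered images paired with the shapes;
    depth_views_1/2: two aggregated multi-view depth renderings of the same shapes."""
    with torch.no_grad():                   # keep the CLIP image encoder frozen
        img_feat = image_encoder(images)
    d1 = depth_encoder(depth_views_1)
    d2 = depth_encoder(depth_views_2)
    cross = info_nce(d1, img_feat)          # cross-modality learning
    intra = info_nce(d1, d2)                # intra-modality learning
    return cross + intra                    # equal weighting assumed here
```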

ICCV 2023

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Transfer 3D Point Cloud Classification | ModelNet10 | CLIP2Point | Accuracy (%) | 66.63 | # 3 |
| Training-free 3D Point Cloud Classification | ModelNet40 | CLIP2Point | Accuracy (%) | 49.4 | # 4 |
| Training-free 3D Point Cloud Classification | ModelNet40 | CLIP2Point | Need 3D Data? | Yes | # 1 |
| Zero-Shot Transfer 3D Point Cloud Classification | ModelNet40 | CLIP2Point | Accuracy (%) | 49.38 | # 10 |
| Zero-shot 3D Point Cloud Classification | ScanNetV2 | CLIP2Point | Top 1 Accuracy (%) | 24.9 | # 6 |
| Zero-shot 3D Point Cloud Classification | ScanNetV2 | CLIP2Point w/ TP. | Top 1 Accuracy (%) | 35.2 | # 4 |
| Zero-Shot Transfer 3D Point Cloud Classification | ScanObjectNN | CLIP2Point | PB_T50_RS Accuracy (%) | 23.32 | # 3 |
| Zero-Shot Transfer 3D Point Cloud Classification | ScanObjectNN | CLIP2Point | OBJ_BG Accuracy (%) | 35.46 | # 3 |
| Zero-Shot Transfer 3D Point Cloud Classification | ScanObjectNN | CLIP2Point | OBJ_ONLY Accuracy (%) | 30.46 | # 7 |
| Training-free 3D Point Cloud Classification | ScanObjectNN | CLIP2Point | Accuracy (%) | 23.2 | # 3 |
| Training-free 3D Point Cloud Classification | ScanObjectNN | CLIP2Point | Need 3D Data? | Yes | # 1 |
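For context, the zero-shot numbers above are obtained by embedding class names with the CLIP text encoder and matching them against features of rendered depth views. The sketch below is a hedged illustration of that pipeline, not the paper's exact implementation: the `render_depth_views` helper, the prompt template, the ViT-B/32 backbone choice, and the mean-pooling of views are all assumptions.

```python
# Illustrative zero-shot classification of a point cloud via rendered depth maps
# and CLIP text prompts (assumed helper names and prompt template).
import torch
import torch.nn.functional as F
import clip

@torch.no_grad()
def zero_shot_classify(depth_encoder, render_depth_views, point_cloud, class_names, device="cuda"):
    model, _ = clip.load("ViT-B/32", device=device)   # CLIP text encoder (backbone assumed)
    prompts = clip.tokenize([f"a depth map of a {c}" for c in class_names]).to(device)
    text_feat = F.normalize(model.encode_text(prompts).float(), dim=-1)        # (C, D)

    views = render_depth_views(point_cloud).to(device)                          # (V, 3, H, W)
    view_feat = F.normalize(depth_encoder(views).float(), dim=-1)               # (V, D)
    shape_feat = F.normalize(view_feat.mean(dim=0, keepdim=True), dim=-1)       # aggregate views

    logits = shape_feat @ text_feat.t()                                         # (1, C) similarities
    return class_names[logits.argmax(dim=-1).item()]
```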
