Sculpting Holistic 3D Representation in Contrastive Language-Image-3D Pre-training

3 Nov 2023 · Yipeng Gao, Zeyu Wang, Wei-Shi Zheng, Cihang Xie, Yuyin Zhou

Contrastive learning has emerged as a promising paradigm for 3D open-world understanding, i.e., aligning point cloud representations with image and text embedding spaces individually. In this paper, we introduce MixCon3D, a simple yet effective method that sculpts a holistic 3D representation through contrastive language-image-3D pre-training. Rather than relying on the point cloud alone, we construct the object-level 3D representation from complementary perspectives, e.g., combining multi-view rendered images with the point cloud. MixCon3D then performs language-3D contrastive learning on this joint representation, depicting real-world 3D objects more comprehensively and strengthening text alignment. Additionally, we present the first thorough investigation of training recipes for the 3D contrastive learning paradigm, establishing a solid baseline with improved performance. Extensive experiments on three representative benchmarks show that our method significantly improves over the baseline, surpassing the previous state-of-the-art on the challenging 1,156-category Objaverse-LVIS dataset by 5.7%. The versatility of MixCon3D is further demonstrated in applications such as text-to-3D retrieval and point cloud captioning. The code is available at https://github.com/UCSC-VLAA/MixCon3D.
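The alignment described above can be sketched as a CLIP-style symmetric InfoNCE loss between a fused 3D embedding and a text embedding. The sketch below is a minimal numpy illustration, not the authors' implementation: the fusion step (averaging the point-cloud feature with the mean of the multi-view image features, then re-normalizing) and all variable names are assumptions for illustration only.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere, as in CLIP-style training.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(a, b, temperature=0.07):
    # Symmetric InfoNCE: matched (a_i, b_i) pairs are positives;
    # every other pairing in the batch serves as a negative.
    logits = a @ b.T / temperature
    idx = np.arange(len(a))
    log_prob_ab = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_prob_ba = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(log_prob_ab[idx, idx].mean() + log_prob_ba[idx, idx].mean()) / 2

rng = np.random.default_rng(0)
batch, dim, views = 4, 8, 3
point_feat = l2_normalize(rng.normal(size=(batch, dim)))        # point-cloud encoder output
view_feat = l2_normalize(rng.normal(size=(batch, views, dim)))  # multi-view render features
text_feat = l2_normalize(rng.normal(size=(batch, dim)))         # text encoder output

# Hypothetical fusion: average the point-cloud feature with the mean
# multi-view image feature to form the holistic 3D object embedding.
fused_3d = l2_normalize(point_feat + view_feat.mean(axis=1))

loss = info_nce(fused_3d, text_feat)
```

With random embeddings the loss sits near log(batch); training would pull matched 3D-text pairs together and push mismatched pairs apart.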


Results from the Paper


 Ranked #1 on Zero-shot 3D classification on Objaverse LVIS (using extra training data)
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Transfer 3D Point Cloud Classification | ModelNet40 | MixCon3D-PointBERT | Accuracy (%) | 86.8 | #4 |
| Zero-shot 3D classification | Objaverse LVIS | MixCon3D (Merge) | Top-1 Accuracy | 55.3 | #1 |
| Zero-Shot Transfer 3D Point Cloud Classification | ScanObjectNN | MixCon3D-PointBERT | OBJ_ONLY Accuracy (%) | 58.6 | #4 |