3D Point Cloud Classification
123 papers with code • 5 benchmarks • 6 datasets
Latest papers with no code
A Benchmark Grocery Dataset of Real-World Point Clouds From Single View
Existing datasets on groceries are mainly 2D images.
PointMoment: Mixed-Moment-based Self-Supervised Representation Learning for 3D Point Clouds
Large and rich data is a prerequisite for effective training of deep neural networks.
Test-Time Augmentation for 3D Point Cloud Classification and Segmentation
We are inspired by the recent revolution in learning implicit representations and point cloud upsampling, which can produce high-quality 3D surface reconstruction and proximity-to-surface estimates, respectively.
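The snippet above alludes to test-time augmentation for point clouds. A minimal sketch of the generic idea, not the paper's specific scheme: run the classifier on several randomly augmented copies of the input cloud and average the predicted probabilities. `random_augment` and `tta_predict` are hypothetical helper names.

```python
import numpy as np

def random_augment(points, rng):
    """Apply a random rigid augmentation (rotation about z plus small
    jitter) to an (N, 3) point cloud. Hypothetical helper, for illustration."""
    theta = rng.uniform(0, 2 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    jitter = rng.normal(scale=0.01, size=points.shape)
    return points @ rot.T + jitter

def tta_predict(model, points, n_aug=8, seed=0):
    """Average class probabilities over several augmented copies of the
    input cloud (generic test-time augmentation)."""
    rng = np.random.default_rng(seed)
    probs = [model(random_augment(points, rng)) for _ in range(n_aug)]
    return np.mean(probs, axis=0)
```

Averaging over augmentations typically stabilizes predictions on clouds whose pose or noise differs from the training distribution.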
OmniVec: Learning robust representations with cross modal sharing
We demonstrate empirically that, using a joint network to train across modalities leads to meaningful information sharing and this allows us to achieve state-of-the-art results on most of the benchmarks.
Deep Learning-based 3D Point Cloud Classification: A Systematic Survey and Outlook
Point cloud classification is the basis of point cloud analysis, and many deep learning-based methods have been widely used in this task.
Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey
To encourage future research, this survey summarizes the current progress on adversarial attack and defense techniques on point cloud classification. This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods in recent years.
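As context for the adversarial example generation methods the survey covers, here is a minimal gradient-sign perturbation of point coordinates, in the style of FGSM (Goodfellow et al.); this is a generic attack sketch, not a specific method from the survey, and `fgsm_perturb` is a hypothetical name.

```python
import numpy as np

def fgsm_perturb(points, grad, eps=0.01):
    """Shift each coordinate of an (N, 3) point cloud by eps in the
    direction that increases the classification loss, given the loss
    gradient w.r.t. the coordinates. Generic FGSM-style sketch."""
    return points + eps * np.sign(grad)
```

In practice the gradient comes from backpropagating the classifier's loss to the input coordinates, and defenses often exploit the resulting outlier points that such shifts create.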
Evaluating Machine Learning Models with NERO: Non-Equivariance Revealed on Orbits
NERO evaluation consists of a task-agnostic interactive interface and a set of visualizations, called NERO plots, which reveal the equivariance properties of the model.
GTNet: Graph Transformer Network for 3D Point Cloud Classification and Semantic Segmentation
The Local Transformer uses a dynamic graph to compute the weights of all neighboring points via intra-domain cross-attention over dynamically updated graph relations, so that every neighboring point can affect the centroid's features with a different weight. The Global Transformer enlarges the receptive field of the Local Transformer via global self-attention.
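The local attention idea described above can be sketched as follows: each centroid attends over its k nearest neighbors and aggregates their features with softmax weights. This is a simplified toy version of the concept (no learned projections or dynamic graph updates), not GTNet's actual layer; `knn_cross_attention` is a hypothetical name.

```python
import numpy as np

def knn_cross_attention(feats, centers, k=4):
    """Toy cross-attention over a k-NN graph: each centroid in (M, D)
    attends to its k nearest neighbors among the (N, D) point features
    and aggregates them with softmax weights."""
    # Pairwise distances between centroid features and all point features.
    d = np.linalg.norm(centers[:, None, :] - feats[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest neighbors per centroid
    out = np.empty_like(centers)
    for i, nbrs in enumerate(idx):
        scores = centers[i] @ feats[nbrs].T     # attention logits
        w = np.exp(scores - scores.max())
        w /= w.sum()                            # softmax weights
        out[i] = w @ feats[nbrs]                # weighted aggregation
    return out
```

Because the weights are a softmax, each output is a convex combination of neighbor features, so different neighbors influence the centroid to different degrees, which is the behavior the abstract describes.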
Connecting Multi-modal Contrastive Representations
This paper proposes a novel training-efficient method for learning MCR without paired data called Connecting Multi-modal Contrastive Representations (C-MCR).
Multi-view Vision-Prompt Fusion Network: Can 2D Pre-trained Model Boost 3D Point Cloud Data-scarce Learning?
Then, a novel multi-view prompt fusion module is developed to effectively fuse information from different views to bridge the gap between 3D point cloud data and 2D pre-trained models.