3D Classification
33 papers with code • 0 benchmarks • 11 datasets
Benchmarks
These leaderboards are used to track progress in 3D Classification
Libraries
Use these libraries to find 3D Classification models and implementations
Datasets
- ShapeNetCore
- ModelNet40-C
- RAD-ChestCT Dataset
- Teeth3DS
- ADHD-200
- Calcium imaging of glomeruli in the olfactory bulb of the mouse in response to thirty-five monomolecular odors
- CVB
- 3D-Point Cloud dataset of various geometrical terrains
- Corn Seeds Dataset
- VIDIMU: Multimodal video and IMU kinematic dataset on daily life activities using affordable devices
Latest papers
ViT-Lens: Initiating Omni-Modal Exploration through 3D Insights
A well-trained lens with a ViT backbone has the potential to serve as one of these foundation models, supervising the learning of subsequent modalities.
Robustifying Point Cloud Networks by Refocusing
In this study, we develop a general mechanism to increase neural network robustness based on focus analysis.
Beyond First Impressions: Integrating Joint Multi-modal Cues for Comprehensive 3D Representation
Insufficient synergy neglects the idea that a robust 3D representation should align with the joint vision-language space, rather than independently aligning with each modality.
OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding
Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation.
MVTN: Learning Multi-View Transformations for 3D Understanding
Multi-view projection techniques have proven highly effective, achieving top-performing results in 3D shape recognition.
ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding
Then, ULIP learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets.
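The cross-modal alignment idea behind ULIP can be illustrated with a symmetric InfoNCE-style contrastive loss: point-cloud embeddings are pulled toward the image or text embeddings of their matching triplet and pushed away from the rest. This is a minimal NumPy sketch of the general technique, not ULIP's actual training code; the encoders producing the embeddings are assumed to exist elsewhere.

```python
import numpy as np

def info_nce(pc_emb, anchor_emb, temperature=0.07):
    """Symmetric InfoNCE loss aligning point-cloud embeddings with
    frozen anchor (image or text) embeddings. Both inputs: (B, D),
    where row i of each matrix belongs to the same training triplet."""
    # L2-normalize so the dot product is cosine similarity
    pc = pc_emb / np.linalg.norm(pc_emb, axis=1, keepdims=True)
    an = anchor_emb / np.linalg.norm(anchor_emb, axis=1, keepdims=True)
    logits = pc @ an.T / temperature            # (B, B) similarity matrix
    labels = np.arange(len(pc))                 # positives on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # cross-entropy in both directions: 3D -> anchor and anchor -> 3D
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly aligned embeddings the diagonal dominates and the loss approaches zero; mismatched embeddings yield a higher loss, which is what drives the alignment during training.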
Local Neighborhood Features for 3D Classification
We train and evaluate PointNeXt on ModelNet40 (synthetic), ScanObjectNN (real-world), and a recent large-scale, real-world grocery dataset, i.e., 3DGrocery100.
PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
In this paper, we first combine CLIP and GPT into a unified 3D open-world learner, named PointCLIP V2, which fully unleashes their potential for zero-shot 3D classification, segmentation, and detection.
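The zero-shot classification step common to CLIP-based 3D methods reduces to a nearest-neighbor search in a shared embedding space: a shape embedding is compared against one text embedding per class name, and the most similar class wins. A minimal sketch of that decision rule, assuming the embeddings have already been produced by some 3D and text encoders:

```python
import numpy as np

def zero_shot_classify(shape_emb, class_text_embs):
    """Pick the class whose text embedding is most cosine-similar to the
    shape embedding -- no labeled 3D training data required.
    shape_emb: (D,); class_text_embs: (C, D), one row per class prompt."""
    s = shape_emb / np.linalg.norm(shape_emb)
    t = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    scores = t @ s                    # cosine similarity per class
    return int(np.argmax(scores)), scores
```

In practice the class prompts are templated sentences ("a point cloud of a chair", etc.) fed through the text encoder; the sketch above only shows the final similarity-and-argmax step.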
PointACL: Adversarial Contrastive Learning for Robust Point Clouds Representation under Adversarial Attack
Adversarial contrastive learning (ACL) is considered an effective way to improve the robustness of pre-trained models.
SimpleView++: Neighborhood Views for Point Cloud Classification
Among these methods, the SimpleView model demonstrates that features from six orthogonal perspective projections of a point cloud achieve competitive 3D classification performance.
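The six-view projection idea can be sketched in a few lines: render the point cloud into one depth map per axis-aligned viewing direction, giving a (6, H, W) image stack that a 2D network can consume. Note that SimpleView uses perspective projections; the sketch below simplifies to orthographic projections, and the resolution and background value are arbitrary choices.

```python
import numpy as np

def orthographic_depth_maps(points, res=32):
    """Render a point cloud into six orthographic depth maps, one per
    axis-aligned view (+x, -x, +y, -y, +z, -z). points: (N, 3) in [-1, 1]."""
    views = []
    for axis in range(3):                      # project along x, y, z
        for sign in (1, -1):                   # from either side
            depth = sign * points[:, axis]     # distance toward the camera
            # the two remaining coordinates index into the image plane
            uv = np.delete(points, axis, axis=1)
            ij = np.clip(((uv + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
            img = np.full((res, res), -np.inf)
            # keep only the nearest (largest-depth) point per pixel
            np.maximum.at(img, (ij[:, 0], ij[:, 1]), depth)
            img[np.isinf(img)] = 0.0           # empty pixels -> background
            views.append(img)
    return np.stack(views)                     # (6, res, res)
```

The resulting stack can then be fed to any standard 2D image classifier, which is the core observation that makes projection-based 3D classification work.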