3D Object Classification
42 papers with code • 3 benchmarks • 6 datasets
3D Object Classification is the task of predicting the class of a 3D object from its point cloud. It is a shape-level prediction: the entire object receives a single category label, in contrast to segmentation, where every point (or voxel) is classified individually. The popular benchmark for this task is the ModelNet dataset. Models are usually evaluated with the Classification Accuracy metric.
(Image credit: Sedaghat et al.)
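The Classification Accuracy metric used on ModelNet can be sketched as follows. This is a minimal illustrative example, not code from any of the papers below; the label lists are hypothetical placeholders.

```python
# Minimal sketch: shape-level classification accuracy, as used on ModelNet.
# Each point cloud receives exactly one predicted class label; accuracy is
# the fraction of objects whose prediction matches the ground truth.

def classification_accuracy(predicted, ground_truth):
    """Overall accuracy over a set of 3D objects (one label per object)."""
    assert len(predicted) == len(ground_truth), "label lists must align"
    correct = sum(p == g for p, g in zip(predicted, ground_truth))
    return correct / len(ground_truth)

# Hypothetical labels for five objects (e.g. ModelNet40 class indices).
preds = [0, 3, 3, 7, 1]
truth = [0, 3, 2, 7, 1]
print(classification_accuracy(preds, truth))  # 4 of 5 correct -> 0.8
```

Note that some works additionally report mean per-class accuracy, which averages the accuracy of each category separately to reduce the effect of class imbalance.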
Latest papers with no code
Unsupervised Contrastive Learning with Simple Transformation for 3D Point Cloud Data
Though a number of point cloud learning methods have been proposed to handle unordered points, most of them are supervised and require labels for training.
LATFormer: Locality-Aware Point-View Fusion Transformer for 3D Shape Recognition
To investigate this, we propose a novel Locality-Aware Point-View Fusion Transformer (LATFormer) for 3D shape retrieval and classification.
Point Discriminative Learning for Data-efficient 3D Point Cloud Analysis
In this work we propose PointDisc, a point discriminative learning method to leverage self-supervisions for data-efficient 3D point cloud classification and segmentation.
ABD-Net: Attention Based Decomposition Network for 3D Point Cloud Decomposition
The encapsulated local features are further passed to the proposed Attention Feature Encoder to learn basic shapes in the point cloud.
Dense Graph Convolutional Neural Networks on 3D Meshes for 3D Object Segmentation and Classification
This paper presents new designs of graph convolutional neural networks (GCNs) on 3D meshes for 3D object segmentation and classification.
Cross-Level Cross-Scale Cross-Attention Network for Point Cloud Representation
First, a point-wise feature pyramid module is introduced to hierarchically extract features from different scales or resolutions.
Sim2Real 3D Object Classification using Spherical Kernel Point Convolution and a Deep Center Voting Scheme
While object semantic understanding is essential for most service robotic tasks, 3D object classification is still an open problem.
Self-Supervised Multi-View Learning via Auto-Encoding 3D Transformations
Then, we self-train a representation to capture the intrinsic 3D object representation by decoding 3D transformation parameters from the fused feature representations of multiple views before and after the transformation.
Spherical Transformer: Adapting Spherical Signal to CNNs
To this end, the proposed method first uses local structured sampling methods such as HEALPix to construct a transformer grid from the information of spherical points and their adjacent points, and then transforms the spherical signals into vectors through the grid.
Generative VoxelNet: Learning Energy-Based Models for 3D Shape Synthesis and Analysis
3D data, which contains rich geometric information about objects and scenes, is valuable for understanding the 3D physical world.