Point-PlaneNet: Plane kernel based convolutional neural network for point clouds analysis

The point cloud is widely accepted as a representation for 3D data, and most 3D sensors can generate it directly. Due to the point cloud's irregular format, analyzing this data with deep learning algorithms is quite challenging. In this paper, a new convolutional neural network, called Point-PlaneNet, is proposed that uses the distance between points and planes to exploit spatial local correlations. The proposed method introduces a simple alternative local operation, called PlaneConv, which extracts local geometric features from point clouds by learning a set of planes in R^n space. The network takes raw point clouds as input and therefore avoids the need to transform point clouds into images or volumes. PlaneConv has a simple theoretical analysis and is easy to incorporate into deep learning models to improve their performance. The proposed method is evaluated on classification, part segmentation, and scene semantic segmentation tasks using four datasets: ModelNet40, MNIST, ShapeNet-Part, and S3DIS. The experimental results show competitive performance compared to previous approaches on all tasks.
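The core idea behind PlaneConv, as described in the abstract, is to featurize points by their distance to a set of learned planes. A minimal sketch of that operation is shown below in NumPy; the function name `plane_conv` and the use of a ReLU activation are illustrative assumptions, and the paper's full operator also involves local grouping and pooling stages not shown here.

```python
import numpy as np

def plane_conv(points, normals, biases):
    """Sketch of a plane-kernel feature (hypothetical API, not the paper's code).

    points:  (N, 3) array of raw point coordinates.
    normals: (K, 3) array of learnable plane normals (one per plane kernel).
    biases:  (K,)   array of learnable plane offsets.
    Returns an (N, K) feature map of activated point-to-plane distances.
    """
    # Normalize each normal so the dot product gives a true signed distance.
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # Signed distance of every point to every plane: <p, n_k> + b_k.
    dists = points @ normals.T + biases  # shape (N, K)
    # Nonlinearity (assumed ReLU here) turns distances into features.
    return np.maximum(dists, 0.0)
```

In a full network these plane parameters would be trained by backpropagation, with the distances computed per local neighborhood rather than globally.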


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Point Cloud Classification | ModelNet40 | Point-PlaneNet | Overall Accuracy | 92.1 | # 83 |
| 3D Point Cloud Classification | ModelNet40 | Point-PlaneNet | Mean Accuracy | 90.5 | # 26 |
| Semantic Segmentation | S3DIS | Point-PlaneNet | Mean IoU | 54.8 | # 48 |
| Semantic Segmentation | S3DIS | Point-PlaneNet | oAcc | 83.9 | # 32 |
| Semantic Segmentation | S3DIS | Point-PlaneNet | Number of params | N/A | # 1 |
| Semantic Segmentation | ShapeNet | Point-PlaneNet | Mean IoU | 85.1 | # 4 |
| 3D Part Segmentation | ShapeNet-Part | Point-PlaneNet | Class Average IoU | 82.5 | # 26 |
| 3D Part Segmentation | ShapeNet-Part | Point-PlaneNet | Instance Average IoU | 85.1 | # 50 |
