Semantic Segmentation for Real Point Cloud Scenes via Bilateral Augmentation and Adaptive Fusion

CVPR 2021  ·  Shi Qiu, Saeed Anwar, Nick Barnes

Given the prominence of current 3D sensors, fine-grained analysis of basic point cloud data merits further investigation. In particular, real point cloud scenes intuitively capture complex real-world surroundings, but the raw nature of 3D data makes them challenging for machine perception. In this work, we concentrate on an essential visual task, semantic segmentation, for large-scale point cloud data collected in real scenes. On the one hand, to reduce ambiguity among nearby points, we augment their local context by fully utilizing both geometric and semantic features in a bilateral structure. On the other hand, we comprehensively interpret the distinctness of points across multiple resolutions and represent the feature map via an adaptive, point-level fusion method for accurate semantic segmentation. Further, we provide targeted ablation studies and intuitive visualizations to validate our key modules. By comparing against state-of-the-art networks on three different benchmarks, we demonstrate the effectiveness of our network.
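To make the adaptive, point-level fusion idea concrete, here is a minimal NumPy sketch: per-point feature maps from several resolutions (already upsampled to the full point set) are combined with softmax-normalized, per-point weights. This is an illustration only, not the paper's implementation; the scalar scoring used here (a simple feature mean) is a hypothetical stand-in for the learned fusion weights described in the paper.

```python
import numpy as np

def adaptive_fusion(feature_maps):
    """Fuse per-point feature maps from multiple resolutions.

    feature_maps: list of (N, C) arrays, one per resolution, all
    upsampled to the same N points. Each point gets one scalar score
    per resolution; a softmax over the resolution axis turns the
    scores into per-point fusion weights.
    """
    stacked = np.stack(feature_maps, axis=0)           # (R, N, C)
    # hypothetical scoring: in the paper these scores are learned
    scores = stacked.mean(axis=-1)                     # (R, N)
    # numerically stable softmax over the resolution axis
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)               # (R, N)
    # weighted sum over resolutions -> fused point features
    return (w[..., None] * stacked).sum(axis=0)        # (N, C)
```

If all resolutions produce identical features, the softmax weights are uniform and the fused map equals the inputs; otherwise points lean toward the resolution with the higher score.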

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Semantic Segmentation | S3DIS | BAAF-Net | Mean IoU | 72.2 | #18 |
| Semantic Segmentation | S3DIS | BAAF-Net | mAcc | 83.1 | #9 |
| Semantic Segmentation | S3DIS | BAAF-Net | oAcc | 88.9 | #16 |
| Semantic Segmentation | S3DIS | BAAF-Net | Number of params | N/A | #1 |
| Semantic Segmentation | S3DIS Area5 | BAAF-Net | mIoU | 65.4 | #36 |
| Semantic Segmentation | S3DIS Area5 | BAAF-Net | oAcc | 88.9 | #24 |
| Semantic Segmentation | S3DIS Area5 | BAAF-Net | mAcc | 73.1 | #25 |
| Semantic Segmentation | S3DIS Area5 | BAAF-Net | Number of params | N/A | #2 |
| Semantic Segmentation | Semantic3D | BAAF-Net | mIoU | 75.4% | #6 |
| Semantic Segmentation | Semantic3D | BAAF-Net | oAcc | 94.9% | #1 |
| 3D Semantic Segmentation | SemanticKITTI | BAAF-Net | test mIoU | 59.9% | #19 |
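The mIoU figures reported above are per-class intersection-over-union averaged over classes. A minimal sketch of that metric, assuming integer per-point label arrays (this is the standard definition, not code from the paper):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union for semantic segmentation.

    pred, target: 1-D integer arrays of per-point class ids in
    [0, num_classes). Classes absent from both prediction and
    ground truth are skipped, so they do not drag down the mean.
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (target == c))
        union = np.sum((pred == c) | (target == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

For example, with `pred = [0, 0, 1, 1]` and `target = [0, 1, 1, 1]`, class 0 scores 1/2 and class 1 scores 2/3, giving an mIoU of 7/12. oAcc (overall accuracy) is simply the fraction of correctly labeled points, and mAcc averages the per-class recalls.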
