Semantic Point Completion Network for 3D Semantic Scene Completion

ECAI 2020  ·  Min Zhong, Gang Zeng

Semantic scene completion (SSC) is composed of scene completion (SC) and semantic segmentation. Most existing methods carry out SSC in a regular 3D grid space, where 3D CNNs incur unnecessary computational cost on empty voxels. In this work, a Semantic Point Completion Network (SPCNet) is proposed to address SSC in the point cloud space. Specifically, SPCNet is an encoder-decoder architecture, in which an Observed Point Encoder extracts features from the observed points, and an Observed-to-Occluded Point Decoder is responsible for mapping those features to the occluded points. Building on SPCNet, we further introduce an Image-Point Fused Semantic Point Completion Network (IPF-SPCNet), which aims to boost SSC performance by combining texture with geometry information. Evaluations are conducted on two public datasets. Experimental results show that our method can address the SC problem in the point cloud space. Compared to state-of-the-art approaches, our method achieves satisfactory results on the SSC task.
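The abstract only outlines the encoder-decoder design, so the following is a minimal PyTorch-style sketch of how such a point completion pipeline could be organized: an encoder lifts observed points to features, and a decoder propagates those features to occluded query points before predicting per-point semantics. The class names, the k-nearest-neighbor interpolation, the layer widths, and the 12-class output are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: an assumed encoder-decoder point completion pipeline,
# not the official SPCNet code.
import torch
import torch.nn as nn

class ObservedPointEncoder(nn.Module):
    """Per-point MLP that lifts observed (visible) points to a feature space."""
    def __init__(self, in_dim=3, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, observed_xyz):               # (B, N_obs, in_dim)
        return self.mlp(observed_xyz)              # (B, N_obs, feat_dim)

class OccludedPointDecoder(nn.Module):
    """Propagates observed-point features to occluded query points via
    inverse-distance-weighted interpolation over the k nearest observed
    points, then predicts a semantic label per occluded point."""
    def __init__(self, feat_dim=128, num_classes=12, k=3):
        super().__init__()
        self.k = k
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, observed_xyz, observed_feat, occluded_xyz):
        # Pairwise distances between occluded queries and observed points.
        dist = torch.cdist(occluded_xyz, observed_xyz)            # (B, N_occ, N_obs)
        knn_dist, knn_idx = dist.topk(self.k, dim=-1, largest=False)
        weight = 1.0 / (knn_dist + 1e-8)
        weight = weight / weight.sum(dim=-1, keepdim=True)        # (B, N_occ, k)
        # Gather the k neighbour features and blend them.
        idx = knn_idx.unsqueeze(-1).expand(-1, -1, -1, observed_feat.size(-1))
        neigh = torch.gather(
            observed_feat.unsqueeze(1).expand(-1, occluded_xyz.size(1), -1, -1),
            2, idx)                                               # (B, N_occ, k, C)
        feat = (weight.unsqueeze(-1) * neigh).sum(dim=2)          # (B, N_occ, C)
        return self.head(torch.cat([feat, occluded_xyz], dim=-1))

class SPCNetSketch(nn.Module):
    def __init__(self, num_classes=12):
        super().__init__()
        self.encoder = ObservedPointEncoder()
        self.decoder = OccludedPointDecoder(num_classes=num_classes)

    def forward(self, observed_xyz, occluded_xyz):
        feat = self.encoder(observed_xyz)
        return self.decoder(observed_xyz, feat, occluded_xyz)     # per-point logits

if __name__ == "__main__":
    net = SPCNetSketch()
    obs = torch.rand(2, 1024, 3)    # observed surface points
    occ = torch.rand(2, 2048, 3)    # occluded points to complete and label
    print(net(obs, occ).shape)      # torch.Size([2, 2048, 12])
```

The image-point fused variant (IPF-SPCNet) would additionally concatenate per-point image features (e.g. sampled from a 2D backbone via projection) to the geometric features before decoding; that fusion step is omitted here for brevity.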


Datasets

NYUv2
Results from the Paper


| Task                         | Dataset | Model      | Metric Name | Metric Value | Global Rank |
|------------------------------|---------|------------|-------------|--------------|-------------|
| 3D Semantic Scene Completion | NYUv2   | IPF-SPCNet | mIoU        | 35.1         | #8          |

Methods


No methods listed for this paper.