Not All Voxels Are Equal: Semantic Scene Completion from the Point-Voxel Perspective

24 Dec 2021 · Xiaokang Chen, Jiaxiang Tang, Jingbo Wang, Gang Zeng

In this paper we revisit Semantic Scene Completion (SSC), the task of jointly predicting the semantic labels and occupancy of 3D scenes. Most existing methods for this task rely on voxelized scene representations to preserve local scene structure. However, because a large fraction of the voxels are visibly empty, these methods suffer from heavy computational redundancy as the network goes deeper, which limits completion quality. To address this dilemma, we propose a novel point-voxel aggregation network. First, we convert the voxelized scene to a point cloud by discarding the visible empty voxels, and adopt a deep point stream to capture semantic information from the scene efficiently. Meanwhile, a lightweight voxel stream containing only two 3D convolution layers preserves the local structure of the voxelized scene. Furthermore, we design an anisotropic voxel aggregation operator to fuse structural details from the voxel stream into the point stream, and a semantic-aware propagation module that uses semantic labels to guide upsampling in the point stream. We demonstrate that our model surpasses the state of the art on two benchmarks by a large margin, with only depth images as input.
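Since no official implementation has been released, the following is a minimal PyTorch sketch of the point-voxel idea described in the abstract, written under our own assumptions: the names `voxels_to_points`, `LightVoxelStream`, and `sample_voxel_features` are hypothetical, and plain trilinear grid sampling stands in for the paper's learned anisotropic voxel aggregation operator; the deep point stream and the semantic-aware propagation module are omitted entirely.

```python
# Minimal sketch (not the authors' code) of the point-voxel aggregation idea:
# (1) keep only occupied voxels as a sparse point set, (2) run a lightweight
# two-layer voxel stream on the dense grid, (3) fuse voxel features back into
# the point stream. Trilinear grid sampling below is a simple stand-in for the
# paper's anisotropic voxel aggregation operator.
import torch
import torch.nn as nn
import torch.nn.functional as F


def voxels_to_points(occupancy):
    """Drop visible empty voxels, keeping occupied ones as a point cloud.

    occupancy: (D, H, W) bool tensor; returns (N, 3) float voxel coordinates.
    """
    return occupancy.nonzero().float()


class LightVoxelStream(nn.Module):
    """Two 3D convolutions that preserve local voxel structure."""

    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, vol):  # vol: (B, C, D, H, W)
        return self.conv(vol)


def sample_voxel_features(vox_feat, points, grid_size):
    """Fuse voxel-stream features into the point stream by trilinear sampling.

    vox_feat: (1, C, D, H, W) features; points: (N, 3) voxel-space coords.
    """
    # Normalize coordinates to [-1, 1] as required by F.grid_sample.
    norm = points / (torch.tensor(grid_size, dtype=torch.float) - 1) * 2 - 1
    # grid_sample expects (x, y, z) ordering, i.e. reversed (D, H, W) indices.
    grid = norm.flip(-1).view(1, -1, 1, 1, 3)
    sampled = F.grid_sample(vox_feat, grid, align_corners=True)
    return sampled.view(vox_feat.size(1), -1).t()  # (N, C) point features


# Toy usage on a random 32^3 occupancy grid.
occ = torch.rand(32, 32, 32) > 0.9                 # sparse occupancy
pts = voxels_to_points(occ)                        # (N, 3) point cloud
vol = occ.float()[None, None]                      # (1, 1, 32, 32, 32)
vox_feat = LightVoxelStream()(vol)                 # structure-preserving features
pt_feat = sample_voxel_features(vox_feat, pts, occ.shape)
print(pts.shape, pt_feat.shape)                    # (N, 3), (N, 16)
```

In this sketch, the expensive deep processing would happen only on the `N` occupied points rather than the full dense grid, which is the redundancy argument the abstract makes; the dense stream stays shallow so its cost is small.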


Results

Task                           Dataset   Model                             Metric   Value   Global Rank
3D Semantic Scene Completion   NYUv2     Point-Voxel Aggregation Network   mIoU     46      #4
