EdgeNet: Semantic Scene Completion from a Single RGB-D Image

8 Aug 2019 · Aloisio Dourado, Teofilo Emidio de Campos, Hansung Kim, Adrian Hilton

Semantic scene completion is the task of predicting a complete 3D representation of volumetric occupancy, with corresponding semantic labels, for a scene observed from a single point of view. Previous works on semantic scene completion from RGB-D data used either depth alone or depth combined with colour, projecting the 2D image into the 3D volume and producing a sparse data representation. In this work, we present a new strategy to encode colour information in 3D space using edge detection and a flipped truncated signed distance function. We also present EdgeNet, a new end-to-end neural network architecture capable of handling features generated from the fusion of depth and edge information. Experimental results show a 6.9% improvement over the state-of-the-art end-to-end approach on real data.
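To make the encoding strategy concrete, the sketch below illustrates the general idea described in the abstract: detect edges in the colour image, back-project the edge pixels into a voxel grid using the depth map, and assign each voxel a flipped-TSDF-style value. This is not the authors' implementation; the Canny detector, grid parameters, and the simplified unsigned encoding are assumptions made for illustration only.

```python
# Illustrative sketch (not the paper's code) of encoding RGB edges in 3D
# with a flipped-TSDF-style value. Assumes numpy, OpenCV and SciPy.
import numpy as np
import cv2
from scipy.ndimage import distance_transform_edt

def edge_ftsdf_volume(rgb, depth, K, voxel_size=0.02,
                      grid_shape=(240, 144, 240), trunc=0.24):
    """Encode colour-image edges as a flipped-TSDF-style voxel volume.

    rgb:   HxWx3 uint8 colour image
    depth: HxW float depth map in metres
    K:     3x3 camera intrinsic matrix
    """
    # 1. Detect edges in the colour image (Canny as a stand-in detector).
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, 100, 200) > 0

    # 2. Back-project edge pixels with valid depth into camera space.
    v, u = np.nonzero(edges & (depth > 0))
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=1)

    # 3. Voxelise: mark voxels containing at least one edge point.
    #    (Alignment of the camera frame to the grid origin is omitted here.)
    idx = np.floor(pts / voxel_size).astype(np.int64)
    keep = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[tuple(idx[keep].T)] = True

    # 4. Flipped-TSDF-style value: 1 at an edge voxel, falling off linearly
    #    to 0 at the truncation distance. The paper's encoding also carries a
    #    sign separating visible from occluded space, omitted for brevity.
    dist = distance_transform_edt(~grid) * voxel_size
    return 1.0 - np.minimum(dist, trunc) / trunc
```

The motivation for a flipped encoding is that values are largest near surfaces and fade with distance, which gives a denser, more informative signal to a 3D CNN than a mostly empty occupancy grid.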


Datasets


Task                          Dataset  Model    Metric  Value  Global Rank
3D Semantic Scene Completion  NYUv2    EdgeNet  mIoU    27.8   #22

Methods


No methods listed for this paper.