3D Graph Neural Networks for RGBD Semantic Segmentation

RGBD semantic segmentation requires joint reasoning about 2D appearance and 3D geometric information. In this paper we propose a 3D graph neural network (3DGNN) that builds a k-nearest-neighbor graph on top of a 3D point cloud. Each node in the graph corresponds to a set of points and is associated with a hidden representation vector initialized with an appearance feature extracted by a unary CNN from 2D images. Relying on recurrent functions, every node dynamically updates its hidden representation based on its current status and incoming messages from its neighbors. This propagation model is unrolled for a fixed number of time steps, and the final per-node representation is used to predict the semantic class of each pixel. We train the model with back-propagation through time. Extensive experiments on the NYUD2 and SUN-RGBD datasets demonstrate the effectiveness of our approach.
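To make the propagation model concrete, below is a minimal sketch of the kind of computation the abstract describes, written in PyTorch. All names here (knn_graph, GNN3D, message_fn, update_fn, num_steps) are illustrative rather than taken from the authors' code, and the GRU-cell update with mean aggregation stands in for the paper's recurrent function; treat it as a sketch under those assumptions, not the authors' implementation.

```python
# Illustrative sketch of 3DGNN-style propagation; names are hypothetical.
import torch
import torch.nn as nn

def knn_graph(points, k):
    """Build a k-nearest-neighbor graph over a 3D point cloud.
    points: (N, 3) tensor; returns (N, k) neighbor indices per node."""
    dist = torch.cdist(points, points)          # (N, N) pairwise distances
    dist.fill_diagonal_(float('inf'))           # exclude self-loops
    return dist.topk(k, largest=False).indices  # (N, k)

class GNN3D(nn.Module):
    def __init__(self, feat_dim, num_classes, num_steps=3):
        super().__init__()
        self.num_steps = num_steps
        self.message_fn = nn.Linear(feat_dim, feat_dim)  # per-neighbor message
        self.update_fn = nn.GRUCell(feat_dim, feat_dim)  # recurrent update (assumed GRU-style)
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, cnn_feats, neighbors):
        # cnn_feats: (N, F) appearance features from the unary 2D CNN,
        # used to initialize each node's hidden state.
        h = cnn_feats
        for _ in range(self.num_steps):  # unrolled propagation steps
            msgs = self.message_fn(h)[neighbors].mean(dim=1)  # aggregate k neighbors
            h = self.update_fn(msgs, h)                       # update hidden state
        # Predict from the initial feature and the final hidden state.
        return self.classifier(torch.cat([cnn_feats, h], dim=1))

points = torch.randn(1024, 3)   # stand-in for a back-projected depth point cloud
feats = torch.randn(1024, 64)   # stand-in for per-node unary CNN features
model = GNN3D(feat_dim=64, num_classes=40)
logits = model(feats, knn_graph(points, k=8))  # (1024, 40) per-node class scores
```

In the full model, per-node scores would be mapped back to pixels for dense segmentation; random tensors stand in for the CNN features and point cloud here so the sketch runs on its own.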


Datasets

NYU Depth v2, SUN-RGBD
Results from the Paper


Ranked #30 on Semantic Segmentation on SUN-RGBD (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Semantic Segmentation | NYU Depth v2 | 3DGNN | Mean IoU | 43.1% | #89 | |
| Semantic Segmentation | SUN-RGBD | PSD-ResNet50 | Mean IoU | 45.9% | #30 | Yes |

Methods


No methods listed for this paper.