Depth-Adapted CNNs for RGB-D Semantic Segmentation

8 Jun 2022 · Zongwei Wu, Guillaume Allibert, Christophe Stolz, Chao Ma, Cédric Demonceaux

RGB-D semantic segmentation has attracted growing research interest thanks to the availability of complementary modalities on the input side. Existing works often adopt a two-stream architecture that processes photometric and geometric information in parallel; few methods explicitly leverage depth cues to adjust the sampling positions of convolutions on RGB images. In this paper, we propose a novel framework, termed Z-ACN (Depth-Adapted CNN), that incorporates depth information directly into the RGB convolutional neural network (CNN). Specifically, Z-ACN generates a 2D depth-adapted offset, fully constrained by low-level features, to guide feature extraction on RGB images. With the generated offset, we introduce two intuitive and effective operations that replace basic CNN operators: depth-adapted convolution and depth-adapted average pooling. Extensive experiments on both indoor and outdoor semantic segmentation tasks demonstrate the effectiveness of our approach.
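The depth-adapted convolution can be read as a deformable convolution whose sampling offsets come from geometry rather than from a learned branch. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: it derives per-pixel offsets from local depth gradients (a stand-in for the paper's low-level geometric constraint) and feeds them to torchvision's `deform_conv2d`. The names `depth_to_offset` and `DepthAdaptedConv` are illustrative, not from the paper.

```python
# Minimal sketch of a depth-adapted convolution in the spirit of Z-ACN.
# Assumptions (not from the paper): offsets come from a fixed, non-learned
# mapping of local depth gradients, and torchvision's deform_conv2d serves
# as the sampling backend.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import deform_conv2d


def depth_to_offset(depth: torch.Tensor, kernel_size: int = 3) -> torch.Tensor:
    """Map a depth map (B, 1, H, W) to sampling offsets (B, 2*K*K, H, W).

    Hypothetical stand-in: shift every kernel tap along the local depth
    gradient, so the sampling grid adapts to scene geometry without learning.
    """
    # Finite differences approximate the depth gradient in x and y.
    gx = F.pad(depth[:, :, :, 1:] - depth[:, :, :, :-1], (0, 1))
    gy = F.pad(depth[:, :, 1:, :] - depth[:, :, :-1, :], (0, 0, 0, 1))
    grad = torch.cat([gy, gx], dim=1)          # (B, 2, H, W), (dy, dx) order
    k2 = kernel_size * kernel_size
    return grad.repeat(1, k2, 1, 1)            # same shift for every tap


class DepthAdaptedConv(nn.Module):
    """Convolution whose sampling grid is deformed by depth-derived offsets."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        self.weight = nn.Parameter(
            torch.empty(out_ch, in_ch, kernel_size, kernel_size))
        nn.init.kaiming_uniform_(self.weight, a=5 ** 0.5)

    def forward(self, rgb_feat: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Offsets are computed from depth only and carry no gradient.
        offset = depth_to_offset(depth, self.kernel_size).detach()
        return deform_conv2d(rgb_feat, offset, self.weight,
                             padding=self.kernel_size // 2)


if __name__ == "__main__":
    conv = DepthAdaptedConv(3, 16)
    x = torch.randn(1, 3, 64, 64)   # RGB features
    d = torch.rand(1, 1, 64, 64)    # depth map at the same resolution
    print(conv(x, d).shape)         # torch.Size([1, 16, 64, 64])
```

Depth-adapted average pooling would follow the same pattern: sample with the depth-derived offsets but aggregate with fixed uniform weights instead of learned ones.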


Results

Task                   Dataset        Model               Mean IoU   Global Rank
Semantic Segmentation  NYU Depth v2   Z-ACN (ResNet-18)   47.02%     #75
Semantic Segmentation  NYU Depth v2   Z-ACN (ResNet-34)   49.15%     #59
Semantic Segmentation  NYU Depth v2   Z-ACN (ResNet-50)   50.05%     #54
Semantic Segmentation  NYU Depth v2   Z-ACN (ResNet-101)  51.24%     #41
