Spatial-information Guided Adaptive Context-aware Network for Efficient RGB-D Semantic Segmentation

11 Aug 2023  ·  Yang Zhang, Chenyun Xiong, Junjie Liu, Xuhui Ye, Guodong Sun

Efficient RGB-D semantic segmentation has received considerable attention in mobile robotics, where it plays a vital role in analyzing and recognizing environmental information. According to previous studies, depth information can provide complementary geometric cues for objects and scenes, but depth data captured in practice are usually noisy. To avoid degrading segmentation accuracy and inflating computation, it is necessary to design an efficient framework that leverages cross-modal correlations and complementary cues. In this paper, we propose an efficient lightweight encoder-decoder network that reduces the number of parameters while guaranteeing the robustness of the algorithm. Equipped with channel and spatial fusion attention modules, our network effectively captures multi-level RGB-D features. A globally guided local affinity context module is proposed to obtain sufficient high-level context information. The decoder utilizes a lightweight residual unit that combines short- and long-distance information with few redundant computations. Experimental results on the NYUv2, SUN RGB-D, and Cityscapes datasets show that our method achieves a better trade-off among segmentation accuracy, inference time, and parameters than state-of-the-art methods. The source code is available at https://github.com/MVME-HBUT/SGACNet
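To illustrate the kind of channel-and-spatial fusion attention the abstract describes, below is a minimal PyTorch sketch of a block that fuses same-resolution RGB and depth feature maps with SE-style channel attention followed by spatial attention. This is not the authors' implementation; the module name, reduction ratio, and layer choices are illustrative assumptions, not details from the paper.

```python
# Hedged sketch (not the SGACNet source) of a channel + spatial fusion
# attention block for merging RGB and depth encoder features.
import torch
import torch.nn as nn


class ChannelSpatialFusion(nn.Module):
    """Fuse same-resolution RGB and depth feature maps using
    channel attention (SE-style) followed by spatial attention.

    `channels` and `reduction` are assumed hyperparameters."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze-and-excitation over the concatenation.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: 7x7 conv over pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Project the reweighted concatenation back to `channels`.
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, depth], dim=1)       # (B, 2C, H, W)
        x = x * self.channel_gate(x)             # reweight channels
        avg = x.mean(dim=1, keepdim=True)        # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)       # (B, 1, H, W)
        x = x * self.spatial_gate(torch.cat([avg, mx], dim=1))
        return self.project(x)                   # back to (B, C, H, W)


if __name__ == "__main__":
    fuse = ChannelSpatialFusion(channels=64)
    rgb = torch.randn(1, 64, 120, 160)
    depth = torch.randn(1, 64, 120, 160)
    print(fuse(rgb, depth).shape)  # torch.Size([1, 64, 120, 160])
```

Gating the concatenated features before a 1x1 projection keeps the fused output at the encoder's channel width, which matches the lightweight design goal stated in the abstract; the actual module layout in SGACNet may differ.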

Task                  | Dataset      | Model               | Metric   | Value | Global Rank
Semantic Segmentation | NYU Depth v2 | SGACNet (R34-NBt1D) | Mean IoU | 49.4% | #57
Semantic Segmentation | NYU Depth v2 | SGACNet (R18-NBt1D) | Mean IoU | 48.2% | #67
