Understanding Dark Scenes by Contrasting Multi-Modal Observations

23 Aug 2023 · Xiaoyu Dong, Naoto Yokoya

Understanding dark scenes based on multi-modal image data is challenging, as both the visible and auxiliary modalities provide only limited semantic information for the task. Previous methods focus on fusing the two modalities but neglect the correlations among semantic classes when minimizing losses to align pixels with labels, resulting in inaccurate class predictions. To address these issues, we introduce a supervised multi-modal contrastive learning approach that increases the semantic discriminability of the learned multi-modal feature spaces by jointly performing cross-modal and intra-modal contrast under the supervision of the class correlations. The cross-modal contrast pulls same-class embeddings from across the two modalities closer together and pushes different-class ones apart. The intra-modal contrast likewise pulls same-class embeddings within each modality together and pushes different-class ones apart. We validate our approach on a variety of tasks covering diverse lighting conditions and image modalities. Experiments show that our approach effectively enhances dark scene understanding from multi-modal images with limited semantics by shaping semantic-discriminative feature spaces. Comparisons with previous methods demonstrate our state-of-the-art performance. Code and pretrained models are available at https://github.com/palmdong/SMMCL.
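
The abstract describes supervised cross-modal and intra-modal contrast over visible and auxiliary embeddings. The sketch below only illustrates that general idea with a SupCon-style loss applied to embeddings pooled from two modalities; it is not the released SMMCL implementation, and the function name, the assumption that pixel embeddings have already been sampled and projected, and the temperature value are all illustrative choices.

```python
import torch
import torch.nn.functional as F

def supervised_multimodal_contrast(feat_vis, feat_aux, labels, temperature=0.1):
    """Illustrative supervised contrastive loss over two modalities.

    feat_vis, feat_aux: (N, D) embeddings from the visible and auxiliary
    branches (already sampled and projected); labels: (N,) class ids.
    Same-class embeddings, within or across modalities, act as positives;
    different-class embeddings act as negatives.
    """
    # Stack both modalities into one embedding bank and L2-normalize.
    feats = F.normalize(torch.cat([feat_vis, feat_aux], dim=0), dim=1)  # (2N, D)
    labs = torch.cat([labels, labels], dim=0)                           # (2N,)

    sim = feats @ feats.t() / temperature                               # (2N, 2N)

    # Exclude self-similarity from both the positives and the normalizer.
    self_mask = torch.eye(len(labs), dtype=torch.bool, device=feats.device)
    pos_mask = (labs.unsqueeze(0) == labs.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))

    # Log-softmax over all non-self pairs, then average over the positives.
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    loss = -pos_log_prob.sum(dim=1) / pos_count
    return loss.mean()
```

Stacking both modalities into a single bank means every same-class pair, whether it spans the two modalities or stays within one, is treated as a positive, which is one simple way to perform the cross-modal and intra-modal contrasts jointly.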


Datasets

NYU Depth v2 · LLRGBD-synthetic


Task                  | Dataset          | Model                | Metric | Value | Global Rank
Semantic Segmentation | LLRGBD-synthetic | SMMCL (SegNeXt-B)    | mIoU   | 68.76 | #1
Semantic Segmentation | LLRGBD-synthetic | SMMCL (SegFormer-B2) | mIoU   | 67.77 | #2
Semantic Segmentation | LLRGBD-synthetic | SMMCL (ResNet-101)   | mIoU   | 64.40 | #5
Semantic Segmentation | NYU Depth v2     | SMMCL (SegNeXt-B)    | mIoU   | 55.8% | #12
Semantic Segmentation | NYU Depth v2     | SMMCL (SegFormer-B2) | mIoU   | 53.7% | #20
Semantic Segmentation | NYU Depth v2     | SMMCL (ResNet-101)   | mIoU   | 52.5% | #30
