SemPLeS: Semantic Prompt Learning for Weakly-Supervised Semantic Segmentation

22 Jan 2024  ·  Ci-Siang Lin, Chien-Yi Wang, Yu-Chiang Frank Wang, Min-Hung Chen

Weakly-Supervised Semantic Segmentation (WSSS) aims to train segmentation models from image data with only image-level supervision. Since precise pixel-level annotations are not available, existing methods typically focus on producing pseudo masks for training segmentation models by refining CAM-like heatmaps. However, the resulting heatmaps may capture only the most discriminative regions of an object category, or may include the co-occurring backgrounds associated with it. To address these issues, we propose the Semantic Prompt Learning for WSSS (SemPLeS) framework, which learns to effectively prompt the CLIP latent space to enhance the semantic alignment between segmented regions and target object categories. More specifically, we propose Contrastive Prompt Learning and Prompt-guided Semantic Refinement to learn prompts that adequately describe and suppress the co-occurring backgrounds associated with each target object category. In this way, SemPLeS achieves better semantic alignment between object regions and the associated class labels, resulting in the desired pseudo masks for training the segmentation model. The proposed SemPLeS framework achieves SOTA performance on the standard WSSS benchmarks, PASCAL VOC and MS COCO, and is compatible with other WSSS methods. The source code is provided in the supplementary material.
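As a rough illustration of the Contrastive Prompt Learning step described above, the sketch below shows how per-class background prompts could be optimized against masked, CLIP-style image features. This is a minimal sketch under assumed design choices: the encoders are lightweight placeholders standing in for frozen CLIP towers, and the loss form, temperature, prompt length, and all names (ContrastivePromptLearner, contrastive_prompt_loss) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal, self-contained sketch of the contrastive prompt-learning idea.
# The real SemPLeS framework prompts a frozen CLIP model; here the image and
# text encoders are stand-in modules, and all hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastivePromptLearner(nn.Module):
    """One learnable 'background' prompt per object category (assumption:
    prompts are learnable token embeddings fed to a frozen text encoder)."""

    def __init__(self, num_classes: int, embed_dim: int = 512, prompt_len: int = 8):
        super().__init__()
        self.bg_prompts = nn.Parameter(
            torch.randn(num_classes, prompt_len, embed_dim) * 0.02
        )

    def forward(self, text_encoder: nn.Module, class_idx: torch.Tensor) -> torch.Tensor:
        # Encode the learnable prompt tokens of the selected classes: (B, embed_dim)
        return text_encoder(self.bg_prompts[class_idx])


def contrastive_prompt_loss(img_encoder, text_encoder, learner,
                            images, masks, class_idx, tau: float = 0.07):
    """Pull the learned background prompt toward background-masked image features
    and push it away from foreground-masked ones (binary InfoNCE-style loss)."""
    fg = img_encoder(images * masks)           # object regions (from CAM-like masks)
    bg = img_encoder(images * (1.0 - masks))   # co-occurring background regions
    p = learner(text_encoder, class_idx)       # learned background prompt features

    p, fg, bg = (F.normalize(t, dim=-1) for t in (p, fg, bg))
    sim_bg = (p * bg).sum(-1) / tau            # positive pair: prompt vs. background
    sim_fg = (p * fg).sum(-1) / tau            # negative pair: prompt vs. object
    return F.softplus(sim_fg - sim_bg).mean()  # = -log sigmoid(sim_bg - sim_fg)


if __name__ == "__main__":
    B, C, H, W, D, L = 4, 3, 32, 32, 512, 8
    # Placeholder frozen encoders standing in for CLIP's image and text towers.
    img_encoder = nn.Sequential(nn.Flatten(), nn.Linear(C * H * W, D)).eval()
    text_encoder = nn.Sequential(nn.Flatten(), nn.Linear(L * D, D)).eval()
    for param in list(img_encoder.parameters()) + list(text_encoder.parameters()):
        param.requires_grad_(False)

    learner = ContrastivePromptLearner(num_classes=20, embed_dim=D, prompt_len=L)
    images = torch.rand(B, C, H, W)
    masks = torch.rand(B, 1, H, W)             # soft foreground masks, e.g. from CAMs
    labels = torch.randint(0, 20, (B,))

    loss = contrastive_prompt_loss(img_encoder, text_encoder, learner, images, masks, labels)
    loss.backward()                            # only the prompt embeddings receive gradients
```

With frozen encoders, only the prompt embeddings are updated, so the learned prompts come to describe the backgrounds that co-occur with each class and can then be used to suppress those regions during mask refinement.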

Task                                      Dataset               Model             Metric     Value   Rank
Weakly-Supervised Semantic Segmentation   COCO 2014 val         SemPLeS (Swin-L)  mIoU       56.1    #2
Weakly-Supervised Semantic Segmentation   PASCAL VOC 2012 test  SemPLeS (Swin-L)  Mean IoU   82.9    #1
Weakly-Supervised Semantic Segmentation   PASCAL VOC 2012 val   SemPLeS (Swin-L)  Mean IoU   83.4    #1
