Pixel Contrastive-Consistent Semi-Supervised Semantic Segmentation

We present a novel semi-supervised semantic segmentation method that jointly achieves two desiderata for regularizing segmentation models: label-space consistency between image augmentations and feature-space contrast among different pixels. We leverage a pixel-level L2 loss and a pixel contrastive loss for these two purposes, respectively. To address the computational-efficiency and false-negative-noise issues inherent in the pixel contrastive loss, we further introduce and investigate several negative sampling techniques. Extensive experiments demonstrate state-of-the-art performance of our method (PC2Seg) with the DeepLab-v3+ architecture across several challenging semi-supervised settings derived from the VOC, Cityscapes, and COCO datasets.
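The abstract describes two training signals: a pixel-level L2 consistency loss between two augmented views, and an InfoNCE-style pixel contrastive loss with randomly sampled negatives. Below is a minimal NumPy sketch of what such a combined objective could look like; the function names, shapes, and the plain random negative sampling are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistency_loss(logits_a, logits_b):
    # Pixel-level L2 loss between the class probabilities predicted
    # for two augmentations of the same image (label-space consistency).
    return np.mean((softmax(logits_a) - softmax(logits_b)) ** 2)

def pixel_contrastive_loss(feat_a, feat_b, num_negatives=64,
                           temperature=0.1, rng=None):
    # feat_a, feat_b: (N, D) L2-normalized pixel embeddings from two views.
    # Positive pair: the same pixel across views; negatives: randomly
    # sampled other pixels (a naive stand-in for the paper's sampling schemes).
    rng = np.random.default_rng() if rng is None else rng
    n = feat_a.shape[0]
    pos = np.sum(feat_a * feat_b, axis=1, keepdims=True)        # (N, 1)
    idx = rng.integers(0, n, size=(n, num_negatives))           # negative indices
    neg = np.einsum("nd,nkd->nk", feat_a, feat_b[idx])          # (N, K)
    logits = np.concatenate([pos, neg], axis=1) / temperature
    # InfoNCE: the positive sits at column 0 of every row.
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -np.mean(log_prob)
```

In practice the two terms would be summed (possibly with a weighting coefficient) and applied to unlabeled images alongside a standard supervised cross-entropy on labeled ones.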

ICCV 2021
Task: Semi-Supervised Semantic Segmentation    Model: PC2Seg

Dataset              Metric            Value   Global Rank
COCO 1/32 labeled    Validation mIoU   46.1    # 4
COCO 1/64 labeled    Validation mIoU   43.7    # 5
COCO 1/128 labeled   Validation mIoU   40.1    # 5
COCO 1/256 labeled   Validation mIoU   37.5    # 5
COCO 1/512 labeled   Validation mIoU   29.9    # 5
