Context Prior for Scene Segmentation

Recent works have widely explored contextual dependencies to achieve more accurate segmentation results. However, most approaches rarely distinguish between different types of contextual dependencies, which can confuse scene understanding. In this work, we directly supervise the feature aggregation to clearly distinguish intra-class and inter-class context. Specifically, we develop a Context Prior supervised by an Affinity Loss. Given an input image and its corresponding ground truth, the Affinity Loss constructs an ideal affinity map to supervise the learning of the Context Prior. The learned Context Prior captures the pixels belonging to the same category, while the reversed prior focuses on pixels of different classes. Embedded into a conventional deep CNN, the proposed Context Prior Layer can selectively capture intra-class and inter-class contextual dependencies, leading to robust feature representations. To validate its effectiveness, we design the Context Prior Network (CPNet). Extensive quantitative and qualitative evaluations demonstrate that the proposed model performs favorably against state-of-the-art semantic segmentation approaches. Specifically, our algorithm achieves 46.3% mIoU on ADE20K, 53.9% mIoU on PASCAL-Context, and 81.3% mIoU on Cityscapes. Code is available at https://git.io/ContextPrior.
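The two supervision ideas in the abstract can be made concrete with a short sketch. The PyTorch snippet below is a minimal illustration, not the paper's implementation: the names ideal_affinity_map, ContextPriorLayer, and feat_size are hypothetical, the prior here is predicted by a single 1x1 convolution (the paper's layer also includes an aggregation sub-network before the prior), and the 1/N normalization is a simplification. It shows how a binary affinity target A = Y Yᵀ can be built from one-hot ground truth, and how a learned prior P splits feature aggregation into intra-class (P) and inter-class (1 − P) context.

```python
import torch
import torch.nn.functional as F


def ideal_affinity_map(labels, num_classes, size):
    """Build the ideal affinity map A = Y @ Y^T from ground-truth labels.

    labels: (B, H, W) integer class indices (assumed to contain no ignore
            index; in practice such pixels would need to be masked out).
    size:   (h, w) resolution at which the prior map is learned.
    Returns a (B, N, N) binary tensor, N = h*w, where A[b, i, j] = 1
    iff pixels i and j belong to the same class.
    """
    labels = F.interpolate(labels.unsqueeze(1).float(), size=size,
                           mode="nearest").squeeze(1).long()       # (B, h, w)
    one_hot = F.one_hot(labels.flatten(1), num_classes).float()    # (B, N, C)
    return torch.bmm(one_hot, one_hot.transpose(1, 2))             # (B, N, N)


class ContextPriorLayer(torch.nn.Module):
    """Hypothetical minimal Context Prior Layer for a fixed feature size
    (h, w): a 1x1 conv predicts an N x N prior map (N = h*w), which is
    supervised against the ideal affinity map and used to aggregate
    intra-class (P) and inter-class (1 - P) context separately."""

    def __init__(self, channels, feat_size):
        super().__init__()
        self.n = feat_size[0] * feat_size[1]
        self.prior_conv = torch.nn.Conv2d(channels, self.n, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Predict the prior map: one affinity row per spatial position.
        prior = torch.sigmoid(self.prior_conv(x)).view(b, self.n, self.n)
        feats = x.view(b, c, self.n).transpose(1, 2)          # (B, N, C)
        intra = torch.bmm(prior, feats) / self.n              # same-class context
        inter = torch.bmm(1.0 - prior, feats) / self.n        # cross-class context
        out = torch.cat([feats, intra, inter], dim=2)         # (B, N, 3C)
        return out.transpose(1, 2).reshape(b, 3 * c, h, w), prior
```

A simple choice for the supervision is a binary cross-entropy term between the returned prior and ideal_affinity_map(gt, num_classes, (h, w)); the paper's full Affinity Loss adds global terms on top of this unary one.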


Results from the Paper


Task                    Dataset          Model            Metric             Value   Global Rank
Semantic Segmentation   ADE20K           CPN(ResNet-101)  Validation mIoU    46.27   #171
Scene Understanding     ADE20K val       CPN(ResNet-101)  Mean IoU           46.3    #1
Semantic Segmentation   ADE20K val       CPN(ResNet-101)  mIoU               46.27   #67
Semantic Segmentation   Cityscapes test  CPN(ResNet-101)  Mean IoU (class)   81.3%   #42
Semantic Segmentation   PASCAL Context   CPN(ResNet-101)  mIoU               53.9    #35
