BBAM: Bounding Box Attribution Map for Weakly Supervised Semantic and Instance Segmentation

CVPR 2021 · Jungbeom Lee, Jihun Yi, Chaehun Shin, Sungroh Yoon

Weakly supervised segmentation methods using bounding box annotations focus on obtaining a pixel-level mask from each box containing an object. Existing methods typically depend on a class-agnostic mask generator, which operates on the low-level information intrinsic to an image. In this work, we utilize higher-level information from the behavior of a trained object detector, by seeking the smallest areas of the image from which the object detector produces almost the same result as it does from the whole image. These areas constitute a bounding-box attribution map (BBAM), which identifies the target object in its bounding box and thus serves as pseudo ground-truth for weakly supervised semantic and instance segmentation. This approach significantly outperforms recent comparable techniques on both the PASCAL VOC and MS COCO benchmarks in weakly supervised semantic and instance segmentation. In addition, we provide a detailed analysis of our method, offering deeper insight into the behavior of the BBAM.
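To make the core idea concrete, below is a minimal sketch (not the authors' released code) of how a BBAM-style attribution map could be optimized for a single detected box: a low-resolution mask is trained so that the detector's prediction on the masked image stays close to its prediction on the full image, while an area penalty keeps the mask small. The `detector` interface, loss weights, and mask resolution are illustrative assumptions.

```python
# Hedged sketch of the BBAM objective; the `detector` callable returning
# (box regression, class scores) for a fixed proposal is a hypothetical stand-in.
import torch
import torch.nn.functional as F

def bbam_for_box(detector, image, num_steps=300, lam=0.01, lr=0.1):
    """Optimize a small attribution map for one box.

    image: (1, 3, H, W) tensor. Returns a (1, 1, H, W) map in [0, 1].
    """
    with torch.no_grad():
        target_box, target_cls = detector(image)       # prediction from the full image
    # Low-resolution mask parameters, upsampled to image size (keeps the map smooth).
    mask = torch.zeros(1, 1, image.shape[2] // 8, image.shape[3] // 8,
                       requires_grad=True)
    optimizer = torch.optim.SGD([mask], lr=lr)
    for _ in range(num_steps):
        m = torch.sigmoid(mask)
        m_up = F.interpolate(m, size=image.shape[2:], mode='bilinear',
                             align_corners=False)
        box, cls = detector(image * m_up)               # prediction from the masked image
        # Keep the masked-image prediction close to the full-image one,
        # while penalizing mask area so only the essential pixels survive.
        loss = (F.smooth_l1_loss(box, target_box)
                + F.kl_div(cls.log_softmax(-1), target_cls.softmax(-1),
                           reduction='batchmean')
                + lam * m_up.abs().mean())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return F.interpolate(torch.sigmoid(mask), size=image.shape[2:],
                         mode='bilinear', align_corners=False).detach()
```

Thresholding the resulting map inside each bounding box would then yield the pseudo ground-truth masks used to train the segmentation network.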

Datasets

PASCAL VOC 2012 · MS COCO

Results

Task                                      Dataset              Model  Metric                Value  Global Rank
Box-supervised Instance Segmentation      COCO test-dev        BBAM   mask AP               25.7   #6
Weakly-supervised instance segmentation   PASCAL VOC 2012 val  BBAM   mAP@0.25              76.8   #1
                                                                      mAP@0.5               63.7   #1
                                                                      mAP@0.75              31.8   #1
                                                                      Average Best Overlap  63.0   #1
Box-supervised Instance Segmentation      PASCAL VOC 2012 val  BBAM   AP_25                 76.8   #2
                                                                      AP_50                 63.7   #2
                                                                      AP_70                 39.5   #3
                                                                      AP_75                 31.8   #4

Methods


No methods listed for this paper.