Instance-aware, Context-focused, and Memory-efficient Weakly Supervised Object Detection

Weakly supervised learning has emerged as a compelling tool for object detection by reducing the need for strong supervision during training. However, major challenges remain: (1) differentiating individual object instances can be ambiguous; (2) detectors tend to focus on discriminative parts rather than entire objects; (3) without ground truth, object proposals have to be redundant to achieve high recall, causing significant memory consumption. Addressing these challenges is difficult, as it often requires eliminating uncertainties and trivial solutions. To address these issues, we develop an instance-aware and context-focused unified framework. It employs an instance-aware self-training algorithm and a learnable Concrete DropBlock, and devises a memory-efficient sequential batch back-propagation. Our proposed method achieves state-of-the-art results on COCO ($12.1\% ~AP$, $24.8\% ~AP_{50}$), VOC 2007 ($54.9\% ~AP$), and VOC 2012 ($52.1\% ~AP$), improving baselines by significant margins. In addition, the proposed method is the first to benchmark ResNet-based models and weakly supervised video object detection. Code, models, and more details will be made available at: https://github.com/NVlabs/wetectron.
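The memory issue arises because, without box-level ground truth, thousands of redundant proposals must pass through the RoI head at every iteration. The sketch below illustrates one way such a sequential (chunked) back-propagation over proposals could look in PyTorch, conceptually similar to gradient checkpointing applied per chunk of proposals; it is only an illustration of the general idea, not the authors' released implementation, and the names `ChunkedRoIHead` and `chunk_size` are hypothetical.

```python
# Minimal sketch (not the wetectron code): process RoI features in chunks so
# peak activation memory scales with chunk_size rather than the total number
# of proposals. Assumes a generic per-RoI sub-network `roi_head`.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class ChunkedRoIHead(nn.Module):
    def __init__(self, roi_head: nn.Module, chunk_size: int = 512):
        super().__init__()
        self.roi_head = roi_head      # heavy per-RoI sub-network (e.g. FC layers)
        self.chunk_size = chunk_size  # number of proposals processed at a time

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        # roi_feats: (num_proposals, C, H, W) pooled features for all proposals.
        outputs = []
        for chunk in roi_feats.split(self.chunk_size, dim=0):
            # checkpoint() discards the head's intermediate activations and
            # recomputes them chunk-by-chunk during the backward pass, so the
            # gradients w.r.t. the backbone features are accumulated
            # sequentially instead of being held for all proposals at once.
            outputs.append(checkpoint(self.roi_head, chunk))
        return torch.cat(outputs, dim=0)
```

In practice, `chunk_size` trades a small amount of extra forward computation for a large reduction in peak GPU memory, which is what makes training with very large proposal sets feasible.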

| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Weakly Supervised Object Detection | COCO test-dev | wetectron (single-model, VGG16) | AP50 | 24.8 | #1 |
| Weakly Supervised Object Detection | PASCAL VOC 2007 | wetectron (single-model) | mAP | 54.9 | #4 |
| Weakly Supervised Object Detection | PASCAL VOC 2007 | wetectron (single-model, 07+12) | mAP | 58.1 | #1 |
| Weakly Supervised Object Detection | PASCAL VOC 2012 test | wetectron (single-model) | mAP | 52.1 | #3 |
