Mixed Pseudo Labels for Semi-Supervised Object Detection

12 Dec 2023  ·  Zeming Chen, Wenwei Zhang, Xinjiang Wang, Kai Chen, Zhi Wang

While the pseudo-label method has demonstrated considerable success in semi-supervised object detection, this paper uncovers notable limitations of the approach. Specifically, pseudo-labeling tends to amplify the inherent strengths of the detector while also accentuating its weaknesses, which manifests as missed detections in the pseudo-labels, particularly for small objects and tail-category objects. To overcome these challenges, this paper proposes Mixed Pseudo Labels (MixPL), which applies Mixup and Mosaic to pseudo-labeled data to mitigate the negative impact of missed detections and to balance the model's learning across object scales. Additionally, detection performance on tail categories is improved by resampling labeled data that contains relevant instances. Notably, MixPL consistently improves the performance of various detectors and obtains new state-of-the-art results with Faster R-CNN, FCOS, and DINO on the COCO-Standard and COCO-Full benchmarks. Furthermore, MixPL also exhibits good scalability to large models, improving DINO Swin-L by 2.5% mAP and achieving a new record of 60.2% mAP on the COCO val2017 benchmark without extra annotations.
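
The core data operations described in the abstract (Mixup and Mosaic applied to pseudo-labeled images) can be sketched roughly as follows. This is a minimal illustration, not the authors' released implementation: the function names `mixup_pseudo` and `mosaic_pseudo`, the (C, H, W) tensor layout, and the [x1, y1, x2, y2] box format are assumptions made for the example.

```python
# Minimal sketch of Mixup and Mosaic over pseudo-labeled detection data.
# Images are assumed to be (C, H, W) float tensors; pseudo-boxes are
# (N, 4) tensors in [x1, y1, x2, y2] format.
import torch
import torch.nn.functional as F


def mixup_pseudo(img_a, boxes_a, img_b, boxes_b, alpha=0.5):
    """Blend two pseudo-labeled images and keep the union of their pseudo-boxes."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = lam * img_a + (1.0 - lam) * img_b
    # Keeping boxes from both images means an object missed by the teacher in
    # one image may still be supervised via the other image's pseudo-labels.
    boxes = torch.cat([boxes_a, boxes_b], dim=0)
    return mixed, boxes


def mosaic_pseudo(imgs, boxes_list, out_size=640):
    """Tile four pseudo-labeled images into a 2x2 mosaic and shift their boxes."""
    assert len(imgs) == 4 and len(boxes_list) == 4
    c = imgs[0].shape[0]
    half = out_size // 2
    canvas = torch.zeros(c, out_size, out_size)
    all_boxes = []
    offsets = [(0, 0), (0, half), (half, 0), (half, half)]  # (y, x) of each tile
    for img, boxes, (oy, ox) in zip(imgs, boxes_list, offsets):
        # Resize each image to the tile size with bilinear interpolation.
        tile = F.interpolate(
            img.unsqueeze(0), size=(half, half), mode="bilinear", align_corners=False
        ).squeeze(0)
        canvas[:, oy:oy + half, ox:ox + half] = tile
        if boxes.numel() > 0:
            h, w = img.shape[1], img.shape[2]
            scale = torch.tensor([half / w, half / h, half / w, half / h])
            shifted = boxes * scale + torch.tensor([ox, oy, ox, oy])
            all_boxes.append(shifted)
    boxes = torch.cat(all_boxes, dim=0) if all_boxes else torch.zeros(0, 4)
    return canvas, boxes
```

In this sketch, Mixup targets missed detections (both images' pseudo-boxes supervise the blended image), while Mosaic shrinks objects onto a shared canvas, which rebalances learning toward smaller scales; both points follow the motivation stated in the abstract rather than the exact implementation details of the paper.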


Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Semi-Supervised Object Detection | COCO 100% labeled data | MixPL | mAP | 55.2 | #1 |
| Semi-Supervised Object Detection | COCO 10% labeled data | MixPL | mAP | 44.6 | #1 |
| Semi-Supervised Object Detection | COCO 10% labeled data | MixPL | detector | DINO-Res50 | #1 |
| Semi-Supervised Object Detection | COCO 1% labeled data | MixPL | mAP | 31.7 | #1 |
| Semi-Supervised Object Detection | COCO 2% labeled data | MixPL | mAP | 34.7 | #1 |
| Semi-Supervised Object Detection | COCO 5% labeled data | MixPL | mAP | 40.1 | #1 |
