Few-Shot Object Detection with Attention-RPN and Multi-Relation Detector

CVPR 2020  ·  Qi Fan, Wei Zhuo, Chi-Keung Tang, Yu-Wing Tai

Conventional methods for object detection typically require a substantial amount of training data, and preparing such high-quality training data is labor-intensive. In this paper, we propose a novel few-shot object detection network that aims to detect objects of unseen categories with only a few annotated examples. Central to our method are our Attention-RPN, Multi-Relation Detector, and Contrastive Training strategy, which exploit the similarity between the few-shot support set and the query set to detect novel objects while suppressing false detections in the background. To train our network, we contribute a new dataset that contains 1000 categories of various objects with high-quality annotations. To the best of our knowledge, this is one of the first datasets specifically designed for few-shot object detection. Once our few-shot network is trained, it can detect objects of unseen categories without further training or fine-tuning. Our method is general and has a wide range of potential applications. We achieve new state-of-the-art performance on multiple datasets in the few-shot setting. The dataset is available at https://github.com/fanq15/Few-Shot-Object-Detection-Dataset.
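
The Attention-RPN described in the abstract hinges on measuring similarity between support and query features before proposal generation. The sketch below is a minimal illustration of that idea, not the authors' implementation: a support feature map is pooled into a small kernel and depthwise cross-correlated with the query feature map, producing an attention map that would feed the RPN. The function name `attention_rpn_features`, the pooling size, the tensor shapes, and the choice of PyTorch are all illustrative assumptions.

```python
# Minimal sketch of attention-style RPN feature modulation (illustrative only).
import torch
import torch.nn.functional as F


def attention_rpn_features(query_feat, support_feat, pool_size=1):
    """Modulate query features with support features before the RPN.

    query_feat:   (1, C, Hq, Wq) feature map of the query image
    support_feat: (1, C, Hs, Ws) feature map of one support image
    """
    # Pool the support feature into a small spatial kernel (1x1 by default).
    kernel = F.adaptive_avg_pool2d(support_feat, pool_size)      # (1, C, k, k)
    c = kernel.shape[1]
    # Depthwise cross-correlation: each support channel attends to the
    # corresponding channel of the query feature map.
    kernel = kernel.view(c, 1, pool_size, pool_size)
    attention = F.conv2d(query_feat, kernel, groups=c, padding=pool_size // 2)
    # The resulting map highlights query regions similar to the support object
    # and would replace the raw query feature as the RPN input.
    return attention


if __name__ == "__main__":
    q = torch.randn(1, 256, 38, 50)   # toy query feature map
    s = torch.randn(1, 256, 20, 20)   # toy support feature map
    print(attention_rpn_features(q, s).shape)  # torch.Size([1, 256, 38, 50])
```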


Datasets


Introduced in the Paper:

FSOD

Used in the Paper:

ImageNet, MS COCO, Cityscapes, KITTI

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Few-Shot Object Detection | MS-COCO (10-shot) | FSOD | AP | 11.1 | #21 |

Methods


Attention-RPN · Multi-Relation Detector · Contrastive Training strategy