Deformable DETR: Deformable Transformers for End-to-End Object Detection

DETR was recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitations of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules attend only to a small set of key sampling points around a reference. Deformable DETR achieves better performance than DETR (especially on small objects) with 10× fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.

Published at ICLR 2021.

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Object Detection | COCO-O | Deformable-DETR (ResNet-50) | Average mAP | 18.5 | #33 |
| Object Detection | COCO-O | Deformable-DETR (ResNet-50) | Effective Robustness | -1.49 | #37 |
| Object Detection | COCO test-dev | Deformable DETR (ResNeXt-101+DCN) | box mAP | 52.3 | #56 |
| Object Detection | COCO test-dev | Deformable DETR (ResNeXt-101+DCN) | AP50 | 71.9 | #11 |
| Object Detection | COCO test-dev | Deformable DETR (ResNeXt-101+DCN) | AP75 | 58.1 | #16 |
| Object Detection | COCO test-dev | Deformable DETR (ResNeXt-101+DCN) | APS | 34.4 | #16 |
| Object Detection | COCO test-dev | Deformable DETR (ResNeXt-101+DCN) | APM | 54.4 | #24 |
| Object Detection | COCO test-dev | Deformable DETR (ResNeXt-101+DCN) | APL | 65.6 | #15 |
| Object Detection | COCO test-dev | Deformable DETR (ResNeXt-101+DCN) | Hardware Burden | 17G | #1 |
| Object Detection | COCO test-dev | Deformable DETR (ResNeXt-101+DCN) | Operations per network pass | 17.3G | #1 |
| 2D Object Detection | SARDet-100K | Deformable DETR | box mAP | 50.0 | #6 |

Methods