MDETR -- Modulated Detection for End-to-End Multi-Modal Understanding

26 Apr 2021 · Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, Nicolas Carion

Multi-modal reasoning systems rely on a pre-trained object detector to extract regions of interest from the image. However, this crucial module is typically used as a black box, trained independently of the downstream task and on a fixed vocabulary of objects and attributes. This makes it challenging for such systems to capture the long tail of visual concepts expressed in free-form text. In this paper we propose MDETR, an end-to-end modulated detector that detects objects in an image conditioned on a raw text query, such as a caption or a question. We use a transformer-based architecture to reason jointly over text and image by fusing the two modalities at an early stage of the model. We pre-train the network on 1.3M text-image pairs, mined from pre-existing multi-modal datasets that have explicit alignment between phrases in the text and objects in the image. We then fine-tune on several downstream tasks such as phrase grounding, referring expression comprehension and segmentation, achieving state-of-the-art results on popular benchmarks. We also investigate the utility of our model as an object detector on a given label set when fine-tuned in a few-shot setting. We show that our pre-training approach provides a way to handle the long tail of object categories that have very few labelled instances. Our approach can be easily extended to visual question answering, achieving competitive performance on GQA and CLEVR. The code and models are available at https://github.com/ashkamath/mdetr.
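
To make the early-fusion idea concrete, here is a minimal PyTorch sketch of a DETR-style modulated detector: image features and text token embeddings are concatenated into a single sequence before the transformer encoder, and learned object queries decode boxes plus soft alignments to text tokens. All module names, sizes, and the toy text encoder here are illustrative assumptions, not the authors' code; the released model uses a ResNet/EfficientNet backbone, a pre-trained RoBERTa text encoder, and positional encodings that are omitted below for brevity.

```python
# Minimal sketch of MDETR-style early fusion (illustrative assumptions
# throughout; see https://github.com/ashkamath/mdetr for the real model).
import torch
import torch.nn as nn
import torchvision

class ModulatedDetector(nn.Module):
    def __init__(self, d_model=256, num_queries=100, vocab_size=30522,
                 max_text_len=256):
        super().__init__()
        # Image backbone: ResNet-50 truncated before pooling/classifier.
        backbone = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        self.input_proj = nn.Conv2d(2048, d_model, kernel_size=1)
        # Toy text encoder (the paper uses a pre-trained RoBERTa).
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # One shared transformer: both modalities are concatenated and
        # fused in the encoder ("early fusion"), then object queries
        # cross-attend to the fused sequence in the decoder.
        self.transformer = nn.Transformer(
            d_model=d_model, num_encoder_layers=6, num_decoder_layers=6,
            batch_first=True)
        self.query_embed = nn.Embedding(num_queries, d_model)
        self.bbox_head = nn.Linear(d_model, 4)  # (cx, cy, w, h), normalized
        # Soft token prediction: each query scores text token positions
        # (+1 "no object" slot) instead of a fixed class vocabulary.
        self.token_align_head = nn.Linear(d_model, max_text_len + 1)

    def forward(self, images, token_ids):
        b = images.shape[0]
        feats = self.input_proj(self.backbone(images))  # (B, d, H, W)
        img_seq = feats.flatten(2).transpose(1, 2)      # (B, H*W, d)
        txt_seq = self.text_embed(token_ids)            # (B, T, d)
        fused = torch.cat([img_seq, txt_seq], dim=1)    # early fusion
        queries = self.query_embed.weight.unsqueeze(0).expand(b, -1, -1)
        hs = self.transformer(fused, queries)           # (B, Q, d)
        return self.bbox_head(hs).sigmoid(), self.token_align_head(hs)

model = ModulatedDetector()
boxes, token_logits = model(torch.randn(2, 3, 224, 224),
                            torch.randint(0, 30522, (2, 12)))
print(boxes.shape, token_logits.shape)  # (2, 100, 4), (2, 100, 257)
```

Because detection is conditioned on the query, the same weights can ground arbitrary free-form phrases rather than a fixed label set, which is what enables the few-shot and long-tail behavior described above.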

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Visual Question Answering (VQA) | CLEVR | MDETR | Accuracy | 99.7 | #2 |
| Visual Question Answering (VQA) | CLEVR-Humans | MDETR | Accuracy | 81.7 | #1 |
| Referring Expression Comprehension | CLEVR-Ref+ | MDETR | Accuracy | 100 | #1 |
| Phrase Grounding | Flickr30k Entities Test | MDETR-ENB5 | R@1 | 84.3 | #5 |
| | | | R@5 | 93.9 | #3 |
| | | | R@10 | 95.8 | #3 |
| Visual Question Answering (VQA) | GQA test-std | MDETR-ENB5 | Accuracy | 62.45 | #3 |
| Generalized Referring Expression Comprehension | gRefCOCO | MDETR | Precision@(F1=1, IoU≥0.5) | 41.5 | #2 |
| | | | N-acc. | 36.1 | #2 |
| Referring Expression Segmentation | PhraseCut | MDETR-ENB3 | Mean IoU | 53.7 | #3 |
| | | | Pr@0.5 | 57.5 | #1 |
| | | | Pr@0.7 | 39.9 | #1 |
| | | | Pr@0.9 | 11.9 | #1 |
| Referring Expression Comprehension | RefCOCO+ | MDETR-ENB3 | Val | 81.13 | #7 |
| | | | Test A | 85.52 | #6 |
| | | | Test B | 72.96 | #6 |
| Referring Expression Comprehension | RefCOCO | Deformable-MDETR | Val | 86.54 | #10 |
| | | | Test A | 89.16 | #9 |
| | | | Test B | 83.00 | #8 |
| Referring Expression Comprehension | RefCOCO | MDETR-ENB3 | Val | 87.51 | #9 |
| | | | Test A | 90.4 | #8 |
| | | | Test B | 82.67 | #9 |
| Referring Expression Comprehension | RefCOCOg-test | MDETR-ENB3 | Accuracy | 83.31 | #7 |
| Referring Expression Comprehension | RefCOCOg-val | MDETR-ENB3 | Accuracy | 83.35 | #8 |
| Referring Image Matting (Keyword-based) | RefMatte | MDETR (ResNet-101) | SAD | 32.27 | #4 |
| | | | MSE | 0.0137 | #4 |
| | | | MAD | 0.0183 | #4 |
| | | | SAD(E) | 33.52 | #4 |
| | | | MSE(E) | 0.0141 | #4 |
| | | | MAD(E) | 0.0190 | #4 |
| Referring Image Matting (Expression-based) | RefMatte | MDETR (ResNet-101) | SAD | 84.70 | #4 |
| | | | MSE | 0.0434 | #4 |
| | | | MAD | 0.0482 | #4 |
| | | | SAD(E) | 90.45 | #4 |
| | | | MSE(E) | 0.0463 | #4 |
| | | | MAD(E) | 0.0515 | #4 |
| Referring Image Matting (RefMatte-RW100) | RefMatte | MDETR (ResNet-101) | SAD | 131.58 | #3 |
| | | | MSE | 0.0675 | #3 |
| | | | MAD | 0.0751 | #3 |
| | | | SAD(E) | 136.59 | #3 |
| | | | MSE(E) | 0.0700 | #3 |
| | | | MAD(E) | 0.0779 | #3 |
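
A note on two metric families in the table above: R@k is the fraction of phrases whose ground-truth region is matched by one of the top-k predictions, and Pr@t is the fraction of predictions whose IoU with the ground truth is at least t. PhraseCut computes IoU over segmentation masks; the box-based version below is only the simplest illustration of the same thresholding idea, with made-up boxes and hypothetical helper names.

```python
# Toy illustration of IoU-thresholded precision (not the official
# evaluation code). Boxes are (x1, y1, x2, y2).
def iou(a, b):
    # Intersection rectangle, clamped to zero when boxes don't overlap.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_at(preds, gts, t=0.5):
    # Pr@t: share of predictions whose IoU with their ground truth >= t.
    hits = sum(iou(p, g) >= t for p, g in zip(preds, gts))
    return hits / len(preds)

preds = [(10, 10, 50, 50), (0, 0, 20, 20)]
gts   = [(12, 12, 48, 52), (30, 30, 60, 60)]
print(precision_at(preds, gts, t=0.5))  # 0.5: first box hits, second misses
```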
