Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks

Large-scale pre-training methods that learn cross-modal representations on image-text pairs are becoming popular for vision-language tasks. Existing methods simply concatenate image region features and text features as the input to the model to be pre-trained, and use self-attention to learn image-text semantic alignments in a brute-force manner. In this paper, we propose a new learning method, Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected and are often mentioned in the paired text. We pre-train an Oscar model on a public corpus of 6.5 million text-image pairs and fine-tune it on downstream tasks, creating new state-of-the-art results on six well-established vision-language understanding and generation tasks.
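
The method above amounts to a simple change of input representation: instead of feeding a transformer the usual (word tokens, region features) pair, Oscar feeds it a (word tokens, object tags, region features) triple, where the tags are embedded with the same text embedding table and therefore act as anchors that self-attention can tie to both modalities. Below is a minimal PyTorch sketch of that input construction; the dimensions, toy token ids, and two-layer encoder are illustrative assumptions, not the authors' released implementation (see https://github.com/microsoft/Oscar for that).

```python
# Minimal sketch of Oscar-style input construction (illustrative only).
# Assumption: caption words and object tags share one BERT-like embedding
# table, and 2048-d detector region features are linearly projected to match.
import torch
import torch.nn as nn

HIDDEN, VOCAB, REGION_DIM = 768, 30522, 2048

word_emb = nn.Embedding(VOCAB, HIDDEN)       # shared by caption words and tags
region_proj = nn.Linear(REGION_DIM, HIDDEN)  # project region features to HIDDEN

# Toy inputs: token ids standing in for a short caption, the ids of the
# detected object tags, and features for 4 region proposals.
caption_ids = torch.randint(0, VOCAB, (1, 7))
tag_ids = torch.randint(0, VOCAB, (1, 2))
region_feats = torch.randn(1, 4, REGION_DIM)

# Oscar input: [words ; tags ; regions], concatenated along the sequence axis.
text_part = torch.cat([word_emb(caption_ids), word_emb(tag_ids)], dim=1)
sequence = torch.cat([text_part, region_proj(region_feats)], dim=1)

# Plain self-attention over the joint sequence: a tag token can attend both
# to the caption word that mentions it and to the region it was detected in.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=12, batch_first=True),
    num_layers=2,
)
contextual = encoder(sequence)
print(contextual.shape)  # torch.Size([1, 13, 768]): 7 words + 2 tags + 4 regions
```

Pre-training then applies a masked-token loss over the text side and a contrastive loss that detects "polluted" tag sequences; both are straightforward heads on top of the contextual outputs.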


Results from the Paper


Ranked #1 on Image Retrieval on MS COCO (Recall@10 metric).

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Cross-Modal Retrieval | COCO 2014 | Oscar | Image-to-text R@1 | 73.5 | #17 |
| Cross-Modal Retrieval | COCO 2014 | Oscar | Image-to-text R@5 | 92.2 | #17 |
| Cross-Modal Retrieval | COCO 2014 | Oscar | Image-to-text R@10 | 96.0 | #16 |
| Cross-Modal Retrieval | COCO 2014 | Oscar | Text-to-image R@1 | 57.5 | #19 |
| Cross-Modal Retrieval | COCO 2014 | Oscar | Text-to-image R@5 | 82.8 | #18 |
| Cross-Modal Retrieval | COCO 2014 | Oscar | Text-to-image R@10 | 89.8 | #16 |
| Image Captioning | COCO Captions | Oscar | BLEU-4 | 41.7 | #11 |
| Image Captioning | COCO Captions | Oscar | METEOR | 30.6 | #11 |
| Image Captioning | COCO Captions | Oscar | CIDEr | 140 | #18 |
| Image Captioning | COCO Captions | Oscar | SPICE | 24.5 | #12 |
| Image-Text Matching | CommercialAdsDataset | Oscar | ADD(S) AUC | 87.45 | #4 |
| Image-to-Text Retrieval | MS COCO | Oscar | Recall@10 | 99.8 | #1 |
| Image Retrieval | MS COCO | Oscar | Recall@10 | 98.3 | #1 |
| Image Captioning | nocaps-val-overall | Oscar | CIDEr | 80.9 | #11 |
| Image Captioning | nocaps-val-overall | Oscar | SPICE | 11.3 | #10 |
| Image Captioning | nocaps-val-overall | Oscar | Pretrain (#images) | 345M | #11 |
| Visual Question Answering (VQA) | VQA v2 test-dev | Oscar | Accuracy | 73.82 | #20 |
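
For context on the retrieval rows above, Recall@K (R@K) counts a query as correct when at least one ground-truth match appears among its top-K ranked candidates, averaged over all queries. A minimal sketch follows; the random similarity matrix and single-ground-truth setup are assumptions for illustration (COCO evaluation actually pairs each image with five captions).

```python
# Minimal Recall@K sketch for cross-modal retrieval (illustrative scores;
# a real evaluation would use model similarities, not random numbers).
import torch

def recall_at_k(sim: torch.Tensor, gt: torch.Tensor, k: int) -> float:
    """sim: [num_queries, num_candidates] similarity matrix.
    gt:  [num_queries] index of the correct candidate per query (one each)."""
    topk = sim.topk(k, dim=1).indices            # [num_queries, k] candidate ids
    hits = (topk == gt.unsqueeze(1)).any(dim=1)  # hit if the match is in top-k
    return hits.float().mean().item() * 100.0    # percentage over all queries

sim = torch.randn(100, 1000)  # e.g. 100 text queries scored against 1000 images
gt = torch.arange(100)        # query i's ground-truth candidate is i
print(f"R@10 = {recall_at_k(sim, gt, 10):.1f}")
```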
