Align before Fuse: Vision and Language Representation Learning with Momentum Distillation

Large-scale vision and language representation learning has shown promising improvements on various vision-language tasks. Most existing methods employ a transformer-based multimodal encoder to jointly model visual tokens (region-based image features) and word tokens. Because the visual tokens and word tokens are unaligned, it is challenging for the multimodal encoder to learn image-text interactions. In this paper, we introduce a contrastive loss to ALign the image and text representations BEfore Fusing (ALBEF) them through cross-modal attention, which enables more grounded vision and language representation learning. Unlike most existing methods, our method requires neither bounding box annotations nor high-resolution images. To improve learning from noisy web data, we propose momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model. We provide a theoretical analysis of ALBEF from a mutual information maximization perspective, showing that different training tasks can be interpreted as different ways to generate views for an image-text pair. ALBEF achieves state-of-the-art performance on multiple downstream vision-language tasks. On image-text retrieval, ALBEF outperforms methods that are pre-trained on orders of magnitude larger datasets. On VQA and NLVR$^2$, ALBEF achieves absolute improvements of 2.37% and 3.84% compared to the state-of-the-art, while enjoying faster inference speed. Code and pre-trained models are available at https://github.com/salesforce/ALBEF/.
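To make the core idea concrete, below is a minimal, illustrative PyTorch sketch of the image-text contrastive (ITC) loss with momentum distillation described in the abstract: unimodal image and text features are aligned in a shared embedding space before any cross-modal fusion, and the contrastive targets are softened with pseudo-targets from an exponential-moving-average (EMA) momentum copy of the encoders. The encoder interfaces (an `out_dim` attribute and a pooled feature vector per sample), module names, and hyper-parameter values here are placeholder assumptions for the sketch, not the paper's exact configuration; the momentum queue of negatives, the fusion encoder, and the ITM/MLM objectives are omitted.

```python
# Minimal sketch of ALBEF-style image-text contrastive learning with momentum
# distillation. Assumes image_encoder / text_encoder return one pooled feature
# vector per sample and expose an `out_dim` attribute (placeholder interface).
import copy
import torch
import torch.nn.functional as F
from torch import nn


class ITCWithMomentumDistillation(nn.Module):
    def __init__(self, image_encoder, text_encoder, embed_dim=256,
                 temperature=0.07, momentum=0.995, alpha=0.4):
        super().__init__()
        # Unimodal encoders and projections into a shared embedding space.
        self.image_encoder = image_encoder
        self.text_encoder = text_encoder
        self.image_proj = nn.Linear(image_encoder.out_dim, embed_dim)
        self.text_proj = nn.Linear(text_encoder.out_dim, embed_dim)
        # Momentum (EMA) copies that produce pseudo-targets; not trained by SGD.
        self.image_encoder_m = copy.deepcopy(image_encoder)
        self.text_encoder_m = copy.deepcopy(text_encoder)
        self.image_proj_m = copy.deepcopy(self.image_proj)
        self.text_proj_m = copy.deepcopy(self.text_proj)
        for module in (self.image_encoder_m, self.text_encoder_m,
                       self.image_proj_m, self.text_proj_m):
            for p in module.parameters():
                p.requires_grad = False
        self.temperature = temperature
        self.momentum = momentum
        self.alpha = alpha  # weight of the soft pseudo-targets

    @torch.no_grad()
    def _momentum_update(self):
        # EMA update: param_m <- m * param_m + (1 - m) * param
        pairs = [(self.image_encoder, self.image_encoder_m),
                 (self.text_encoder, self.text_encoder_m),
                 (self.image_proj, self.image_proj_m),
                 (self.text_proj, self.text_proj_m)]
        for online, target in pairs:
            for p, p_m in zip(online.parameters(), target.parameters()):
                p_m.data.mul_(self.momentum).add_(p.data, alpha=1.0 - self.momentum)

    def forward(self, images, text_tokens):
        # Online embeddings, L2-normalised so dot products are cosine similarities.
        img_feat = F.normalize(self.image_proj(self.image_encoder(images)), dim=-1)
        txt_feat = F.normalize(self.text_proj(self.text_encoder(text_tokens)), dim=-1)

        with torch.no_grad():
            self._momentum_update()
            img_feat_m = F.normalize(
                self.image_proj_m(self.image_encoder_m(images)), dim=-1)
            txt_feat_m = F.normalize(
                self.text_proj_m(self.text_encoder_m(text_tokens)), dim=-1)
            # Soft pseudo-targets: mix the momentum model's similarity
            # distribution with the one-hot targets of the matched pairs.
            sim_i2t_m = img_feat_m @ txt_feat_m.t() / self.temperature
            sim_t2i_m = txt_feat_m @ img_feat_m.t() / self.temperature
            onehot = torch.eye(images.size(0), device=images.device)
            tgt_i2t = self.alpha * F.softmax(sim_i2t_m, dim=1) + (1 - self.alpha) * onehot
            tgt_t2i = self.alpha * F.softmax(sim_t2i_m, dim=1) + (1 - self.alpha) * onehot

        # Contrastive loss over in-batch negatives, with soft targets.
        sim_i2t = img_feat @ txt_feat.t() / self.temperature
        sim_t2i = txt_feat @ img_feat.t() / self.temperature
        loss_i2t = -(F.log_softmax(sim_i2t, dim=1) * tgt_i2t).sum(dim=1).mean()
        loss_t2i = -(F.log_softmax(sim_t2i, dim=1) * tgt_t2i).sum(dim=1).mean()
        return (loss_i2t + loss_t2i) / 2
```

In the paper, momentum distillation is also applied to the masked language modeling objective, negatives are drawn from large momentum queues rather than only the current batch, and the distillation weight α is ramped up during the first epoch; all of that is left out of this sketch for brevity.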


Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Zero-Shot Cross-Modal Retrieval | COCO 2014 | ALBEF | Image-to-text R@1 | 68.7 | #7
Zero-Shot Cross-Modal Retrieval | COCO 2014 | ALBEF | Image-to-text R@5 | 89.5 | #5
Zero-Shot Cross-Modal Retrieval | COCO 2014 | ALBEF | Image-to-text R@10 | 94.7 | #4
Zero-Shot Cross-Modal Retrieval | COCO 2014 | ALBEF | Text-to-image R@1 | 50.1 | #7
Zero-Shot Cross-Modal Retrieval | COCO 2014 | ALBEF | Text-to-image R@5 | 76.4 | #5
Zero-Shot Cross-Modal Retrieval | COCO 2014 | ALBEF | Text-to-image R@10 | 84.5 | #5
Cross-Modal Retrieval | COCO 2014 | ALBEF | Image-to-text R@1 | 77.6 | #12
Cross-Modal Retrieval | COCO 2014 | ALBEF | Image-to-text R@5 | 94.3 | #12
Cross-Modal Retrieval | COCO 2014 | ALBEF | Image-to-text R@10 | 97.2 | #10
Cross-Modal Retrieval | COCO 2014 | ALBEF | Text-to-image R@1 | 60.7 | #15
Cross-Modal Retrieval | COCO 2014 | ALBEF | Text-to-image R@5 | 84.3 | #14
Cross-Modal Retrieval | COCO 2014 | ALBEF | Text-to-image R@10 | 90.5 | #12
Image-Text Matching | CommercialAdsDataset | ALBEF | ADD(S) AUC | 82.74 | #8
Zero-Shot Cross-Modal Retrieval | Flickr30k | ALBEF | Image-to-text R@1 | 90.5 | #9
Zero-Shot Cross-Modal Retrieval | Flickr30k | ALBEF | Image-to-text R@5 | 98.8 | #11
Zero-Shot Cross-Modal Retrieval | Flickr30k | ALBEF | Image-to-text R@10 | 99.7 | #7
Zero-Shot Cross-Modal Retrieval | Flickr30k | ALBEF | Text-to-image R@1 | 76.8 | #11
Zero-Shot Cross-Modal Retrieval | Flickr30k | ALBEF | Text-to-image R@5 | 93.7 | #12
Zero-Shot Cross-Modal Retrieval | Flickr30k | ALBEF | Text-to-image R@10 | 96.7 | #10
Image-to-Text Retrieval | Flickr30k | ALBEF | Recall@1 | 95.9 | #7
Image-to-Text Retrieval | Flickr30k | ALBEF | Recall@5 | 99.8 | #7
Image-to-Text Retrieval | Flickr30k | ALBEF | Recall@10 | 100.0 | #1
Visual Reasoning | NLVR2 Dev | ALBEF (14M) | Accuracy | 83.14 | #11
Visual Reasoning | NLVR2 Test | ALBEF (14M) | Accuracy | 82.55 | #9
Open Vocabulary Attribute Detection | OVAD-Box benchmark | ALBEF | Mean average precision | 21.0 | #5
Visual Question Answering (VQA) | VQA v2 test-dev | ALBEF (14M) | Accuracy | 75.84 | #19
Visual Question Answering (VQA) | VQA v2 test-std | ALBEF (14M) | Overall accuracy | 76.04 | #15
