Learning Transferable Visual Models From Natural Language Supervision

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.
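
The pre-training task described in the abstract (predicting which caption goes with which image) is a batch-level contrastive objective. The snippet below is a minimal PyTorch sketch of that idea, assuming paired image and text embeddings from one batch; the helper name `clip_contrastive_loss` and the fixed temperature are illustrative choices (the paper treats the temperature as a learned parameter).

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features: torch.Tensor,
                          text_features: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric cross-entropy over the image-text similarity matrix.

    image_features, text_features: [N, d] embeddings of N paired
    (image, text) examples; pair i is the positive for row/column i.
    """
    # L2-normalize so the dot product is cosine similarity.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # [N, N] similarity matrix, scaled by the temperature.
    logits = image_features @ text_features.T / temperature

    # The matching caption for image i sits on the diagonal.
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_image_to_text = F.cross_entropy(logits, targets)
    loss_text_to_image = F.cross_entropy(logits.T, targets)
    return (loss_image_to_text + loss_text_to_image) / 2
```

Zero-shot transfer then amounts to embedding candidate class names as natural-language prompts and picking the most similar one. A minimal usage sketch with the released repository's Python API (`clip.load`, `clip.tokenize`, `encode_image`, `encode_text`) follows; the image path and the candidate class names are placeholders chosen for illustration.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder image and candidate classes; any labels can be used,
# since classes are specified purely through natural language.
image = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)
prompts = [f"a photo of a {c}" for c in ["dog", "cat", "car"]]
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Cosine similarities between the image and each prompt,
    # softmaxed into per-class probabilities.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)
```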


Results from the Paper


 Ranked #1 on Zero-Shot Learning on COCO-MLT (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Zero-Shot Transfer Image Classification | aYahoo | CLIP | Accuracy | 98.4 | #1 |
| Prompt Engineering | Caltech-101 | CLIP | Harmonic mean | 95.40 | #10 |
| Zero-Shot Cross-Modal Retrieval | COCO 2014 | CLIP | Image-to-text R@1 | 58.4 | #13 |
| Zero-Shot Cross-Modal Retrieval | COCO 2014 | CLIP | Image-to-text R@5 | 81.5 | #14 |
| Zero-Shot Cross-Modal Retrieval | COCO 2014 | CLIP | Image-to-text R@10 | 88.1 | #13 |
| Zero-Shot Cross-Modal Retrieval | COCO 2014 | CLIP | Text-to-image R@1 | 37.8 | #14 |
| Zero-Shot Cross-Modal Retrieval | COCO 2014 | CLIP | Text-to-image R@5 | 62.4 | #14 |
| Zero-Shot Cross-Modal Retrieval | COCO 2014 | CLIP | Text-to-image R@10 | 72.2 | #13 |
| Long-tail Learning | COCO-MLT | CLIP (ViT-B/16) | Average mAP | 60.17 | #2 |
| Long-tail Learning | COCO-MLT | CLIP (ResNet-50) | Average mAP | 56.19 | #5 |
| Zero-Shot Learning | COCO-MLT | ViT-B/16 | Average mAP | 60.17 | #2 |
| Zero-Shot Learning | COCO-MLT | ResNet-50 | Average mAP | 56.19 | #1 |
| Prompt Engineering | DTD | CLIP | Harmonic mean | 56.37 | #10 |
| Prompt Engineering | EuroSAT | CLIP | Harmonic mean | 60.03 | #10 |
| Prompt Engineering | FGVC-Aircraft | CLIP | Harmonic mean | 31.09 | #9 |
| Zero-Shot Cross-Modal Retrieval | Flickr30k | CLIP | Image-to-text R@1 | 88.0 | #13 |
| Zero-Shot Cross-Modal Retrieval | Flickr30k | CLIP | Image-to-text R@5 | 98.7 | #13 |
| Zero-Shot Cross-Modal Retrieval | Flickr30k | CLIP | Image-to-text R@10 | 99.4 | #12 |
| Zero-Shot Cross-Modal Retrieval | Flickr30k | CLIP | Text-to-image R@1 | 68.7 | #16 |
| Zero-Shot Cross-Modal Retrieval | Flickr30k | CLIP | Text-to-image R@5 | 90.6 | #16 |
| Zero-Shot Cross-Modal Retrieval | Flickr30k | CLIP | Text-to-image R@10 | 95.2 | #13 |
| Object Categorization | GRIT | CLIP | Categorization (ablation) | 48.1 | #3 |
| Meme Classification | Hateful Memes | CLIP (zero-shot) | ROC-AUC | 0.661 | #9 |
| Zero-Shot Transfer Image Classification | ImageNet | CLIP (ViT-L/14-336px) | Accuracy (Private) | 76.2 | #18 |
| Zero-Shot Transfer Image Classification | ImageNet | CLIP (ResNet50) | Accuracy (Private) | 59.6 | #22 |
| Prompt Engineering | ImageNet | CLIP | Harmonic mean | 70.22 | #11 |
| Zero-Shot Transfer Image Classification | ImageNet | CLIP | Accuracy (Public) | 31.3 | #3 |
| Semi-Supervised Image Classification | ImageNet - 0.2% labeled data | CLIP (ResNet-50) | ImageNet Top-1 Accuracy | 40% | #3 |
| Few-Shot Image Classification | ImageNet - 0-Shot | CLIP (ViT B/32) | Accuracy | 63.2% | #2 |
| Few-Shot Image Classification | ImageNet - 0-Shot | CLIP (ResNet50) | Accuracy | 59.6% | #3 |
| Prompt Engineering | ImageNet-A | CLIP | Top-1 accuracy % | 47.77 | #7 |
| Zero-Shot Transfer Image Classification | ImageNet-A | CLIP | Accuracy (Private) | 77.2 | #10 |
| Zero-Shot Transfer Image Classification | ImageNet-A | CLIP | Accuracy (Public) | - | #2 |
| Prompt Engineering | ImageNet-R | CLIP | Top-1 accuracy % | 73.96 | #7 |
| Zero-Shot Transfer Image Classification | ImageNet-R | CLIP | Accuracy | 88.9 | #10 |
| Prompt Engineering | ImageNet-S | CLIP | Top-1 accuracy % | 46.15 | #7 |
| Zero-Shot Transfer Image Classification | ImageNet V2 | CLIP | Accuracy (Private) | 70.1 | #10 |
| Zero-Shot Transfer Image Classification | ImageNet V2 | CLIP | Accuracy (Public) | - | #2 |
| Prompt Engineering | ImageNet V2 | CLIP | Top-1 accuracy % | 60.83 | #6 |
| Out-of-Distribution Generalization | ImageNet-W | CLIP (ViT-L/14, zero-shot, LAION-400M) | IN-W Gap | -4.9 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | CLIP (ViT-L/14, zero-shot, LAION-400M) | Carton Gap | +12 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | CLIP (ViT-H/14, zero-shot, LAION-2B) | IN-W Gap | -3.6 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | CLIP (ViT-H/14, zero-shot, LAION-2B) | Carton Gap | +16 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | CLIP (ViT-G/14, zero-shot, LAION-2B) | IN-W Gap | -3.8 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | CLIP (ViT-G/14, zero-shot, LAION-2B) | Carton Gap | +12 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | CLIP (ViT-L/14, zero-shot, WIT) | IN-W Gap | -4.4 | #1 |
| Out-of-Distribution Generalization | ImageNet-W | CLIP (ViT-L/14, zero-shot, WIT) | Carton Gap | +12 | #1 |
| Zero-Shot Transfer Image Classification | ObjectNet | CLIP | Accuracy (Private) | 72.3 | #8 |
| Zero-Shot Transfer Image Classification | ObjectNet | CLIP | Accuracy (Public) | - | #2 |
| Image Classification | ObjectNet | CLIP | Top-1 Accuracy | 72.3 | #10 |
| Image Classification | OmniBenchmark | CLIP-RN50 | Average Top-1 Accuracy | 42.1 | #5 |
| Open Vocabulary Attribute Detection | OVAD-Box benchmark | CLIP ViT-B16 | mean average precision | 16.6 | #7 |
| Prompt Engineering | Oxford 102 Flower | CLIP | Harmonic mean | 74.83 | #10 |
| Prompt Engineering | Oxford-IIIT Pet Dataset | CLIP | Harmonic mean | 94.12 | #10 |
| Action Recognition | RareAct | CLIP | mWAP | 40.7 | #2 |
| Prompt Engineering | Stanford Cars | CLIP | Harmonic mean | 68.65 | #10 |
| Zero-Shot Transfer Image Classification | SUN | CLIP | Accuracy | 58.5 | #2 |
| Prompt Engineering | SUN397 | CLIP | Harmonic mean | 72.23 | #10 |
| Prompt Engineering | UCF101 | CLIP | Harmonic mean | 73.85 | #10 |
| Zero-Shot Learning | VOC-MLT | CLIP (ViT-B/16) | Average mAP | 85.77 | #2 |
| Long-tail Learning | VOC-MLT | CLIP (ViT-B/16) | Average mAP | 85.77 | #2 |
| Long-tail Learning | VOC-MLT | CLIP (ResNet-50) | Average mAP | 84.30 | #4 |
| Zero-Shot Learning | VOC-MLT | CLIP (ResNet-50) | Average mAP | 84.30 | #1 |

Methods