Three things everyone should know about Vision Transformers

18 Mar 2022  ·  Hugo Touvron, Matthieu Cord, Alaaeldin El-Nouby, Jakob Verbeek, Hervé Jégou

After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy-to-implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting accuracy. (2) Fine-tuning only the weights of the attention layers is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces peak memory consumption at fine-tuning time, and allows the majority of weights to be shared across tasks. (3) Adding MLP-based patch pre-processing layers improves BERT-like self-supervised training based on patch masking. We evaluate the impact of these design choices on the ImageNet-1k dataset, and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets.
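Insight (1) can be illustrated with a toy sketch (not the paper's implementation): instead of applying each residual block pair one after another, the parallel variant applies several pairs side by side and sums their residual contributions, halving sequential depth. The scalar "layers" below are stand-ins; in a real ViT they would be multi-head self-attention (MHSA) and MLP blocks operating on token tensors.

```python
def sequential_blocks(x, blocks):
    # Standard ViT layout: x <- x + mhsa(x); x <- x + mlp(x), one pair at a time.
    for mhsa, mlp in blocks:
        x = x + mhsa(x)
        x = x + mlp(x)
    return x

def parallel_blocks(x, blocks, width=2):
    # Parallel variant: run `width` block pairs on the same input and
    # add all their residual outputs at once, reducing sequential depth.
    for i in range(0, len(blocks), width):
        group = blocks[i:i + width]
        x = x + sum(mhsa(x) for mhsa, _ in group)
        x = x + sum(mlp(x) for _, mlp in group)
    return x

# Four identical toy block pairs (purely illustrative functions).
toy = [(lambda v: 0.1 * v, lambda v: 0.2 * v)] * 4
y_seq = sequential_blocks(1.0, toy)   # sequential depth 8
y_par = parallel_blocks(1.0, toy)     # sequential depth 4, two pairs per step
```

The two layouts are not numerically identical, but the paper's observation is that for sufficiently deep/wide ViTs the parallel arrangement reaches comparable accuracy at lower latency; model names like ViT-B-18x2 in the results below encode "18 sequential steps of 2 parallel block pairs".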


Results from the Paper


Ranked #8 on Image Classification on CIFAR-10 (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Classification | CIFAR-10 | ViT-B (attn fine-tune) | Percentage correct | 99.3 | #8 |
| Image Classification | CIFAR-100 | ViT-L (attn fine-tune) | Percentage correct | 93.0 | #12 |
| Image Classification | Flowers-102 | ViT-B (attn finetune) | Accuracy | 98.5 | #22 |
| Image Classification | ImageNet | ViT-B (hMLP + BeiT) | Top 1 Accuracy | 83.4% | #394 |
| Image Classification | ImageNet | ViT-B-18x2 | Top 1 Accuracy | 84.1% | #325 |
| Image Classification | ImageNet | ViT-B@384 (attn finetune) | Top 1 Accuracy | 84.3% | #305 |
| Image Classification | ImageNet | ViT-B-36x1 | Top 1 Accuracy | 84.1% | #325 |
| Image Classification | ImageNet | ViT-L@384 (attn finetune) | Top 1 Accuracy | 85.5% | #212 |
| Image Classification | ImageNet | ViT-S-48x1 | Top 1 Accuracy | 82.3% | #501 |
| Image Classification | ImageNet | ViT-S-24x2 | Top 1 Accuracy | 82.6% | #474 |
| Image Classification | ImageNet V2 | ViT-B-36x1 | Top 1 Accuracy | 73.9 | #19 |
| Image Classification | iNaturalist 2018 | ViT-L (attn finetune) | Top-1 Accuracy | 75.3% | #21 |
| Fine-Grained Image Classification | Stanford Cars | ViT-L (attn finetune) | Accuracy | 93.8% | #52 |
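Several models above are labeled "attn finetune", i.e. insight (2): only the attention layers' weights are updated during fine-tuning while everything else stays frozen and can be shared across tasks. A hypothetical sketch of the parameter split (the parameter names below are illustrative, not the authors' actual checkpoint keys):

```python
def attn_finetune_params(named_params):
    """Split a parameter dict into trainable (attention) and frozen (rest)."""
    trainable, frozen = {}, {}
    for name, p in named_params.items():
        # Assumed naming convention: attention tensors live under ".attn.".
        (trainable if ".attn." in name else frozen)[name] = p
    return trainable, frozen

# Toy checkpoint with placeholder tensors.
params = {
    "blocks.0.attn.qkv.weight": "...",
    "blocks.0.attn.proj.weight": "...",
    "blocks.0.mlp.fc1.weight": "...",
    "patch_embed.proj.weight": "...",
}
trainable, frozen = attn_finetune_params(params)
# Only the two attention tensors would receive gradient updates; the
# frozen MLP and patch-embedding weights can be shared across tasks.
```

In a real training framework this split would typically be realized by disabling gradients on the frozen set (e.g. marking those tensors as non-trainable) and passing only the attention parameters to the optimizer.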
