An Empirical Study of Training Self-Supervised Vision Transformers

ICCV 2021  ·  Xinlei Chen, Saining Xie, Kaiming He

This paper does not describe a novel method. Instead, it studies a straightforward, incremental, yet must-know baseline given the recent progress in computer vision: self-supervised learning for Vision Transformers (ViT). While the training recipes for standard convolutional networks are highly mature and robust, the recipes for ViT are yet to be built, especially in self-supervised scenarios where training becomes more challenging. In this work, we go back to basics and investigate the effects of several fundamental components for training self-supervised ViT. We observe that instability is a major issue that degrades accuracy, and it can be hidden by apparently good results. We reveal that these results are indeed partial failures, and they can be improved when training is made more stable. We benchmark ViT results in MoCo v3 and several other self-supervised frameworks, with ablations in various aspects. We discuss the currently positive evidence as well as challenges and open questions. We hope that this work will provide useful data points and experience for future research.
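MoCo v3 trains the ViT encoder with a contrastive objective: features of one augmented view (queries) must match the features of the other view (keys) from a momentum encoder, against in-batch negatives. As a rough illustration only (not the authors' code; function name, temperature value, and shapes are illustrative assumptions), the core InfoNCE loss can be sketched in NumPy:

```python
import numpy as np

def infonce_loss(q, k, tau=0.2):
    """Illustrative InfoNCE loss: for each query q[i], the positive key is
    k[i] (same image, other augmentation); all other keys in the batch
    serve as negatives. q, k have shape (batch, feature_dim)."""
    # L2-normalize so logits are cosine similarities scaled by 1/tau
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    k = k / np.linalg.norm(k, axis=1, keepdims=True)
    logits = q @ k.T / tau                       # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # positives lie on the diagonal; cross-entropy picks them out
    return -np.mean(np.diag(log_prob))
```

In the paper's setup this loss is symmetrized over the two views and the key encoder is a momentum-updated copy of the query encoder; aligned query/key pairs drive the loss toward zero, which is the signal the contrastive pre-training optimizes.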

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-B/16) | Top 1 Accuracy | 76.7% | # 55 |
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-B/16) | Number of Params | 86M | # 35 |
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-BN-L/7) | Top 1 Accuracy | 81.0% | # 19 |
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-BN-L/7) | Number of Params | 304M | # 25 |
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-L) | Top 1 Accuracy | 77.6% | # 49 |
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-L) | Number of Params | 307M | # 16 |
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-BN-H) | Top 1 Accuracy | 79.1% | # 36 |
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-H) | Top 1 Accuracy | 78.1% | # 46 |
| Self-Supervised Image Classification | ImageNet | MoCo v3 (ViT-H) | Number of Params | 632M | # 6 |
| Self-Supervised Image Classification | ImageNet (finetuned) | MoCo v3 (ViT-L/16) | Number of Params | 304M | # 23 |
| Self-Supervised Image Classification | ImageNet (finetuned) | MoCo v3 (ViT-L/16) | Top 1 Accuracy | 84.1% | # 35 |
| Self-Supervised Image Classification | ImageNet (finetuned) | MoCo v3 (ViT-B/16) | Number of Params | 86M | # 36 |
| Self-Supervised Image Classification | ImageNet (finetuned) | MoCo v3 (ViT-B/16) | Top 1 Accuracy | 83.2% | # 44 |
| Out-of-Distribution Generalization | ImageNet-W | MoCo v3 (ViT-B/16, linear probing) | IN-W Gap | -16.0 | # 1 |
| Out-of-Distribution Generalization | ImageNet-W | MoCo v3 (ViT-B/16, linear probing) | Carton Gap | +22 | # 1 |
| Out-of-Distribution Generalization | ImageNet-W | MoCo v3 (ResNet-50, linear probing) | IN-W Gap | -20.7 | # 1 |
| Out-of-Distribution Generalization | ImageNet-W | MoCo v3 (ResNet-50, linear probing) | Carton Gap | +44 | # 1 |
