Revisiting Self-Supervised Visual Representation Learning

Unsupervised visual representation learning remains a largely unsolved problem in computer vision research. Among the many recently proposed approaches to unsupervised learning of visual representations, a class of self-supervised techniques achieves superior performance on many challenging benchmarks. A large number of pretext tasks for self-supervised learning have been studied, but other important aspects, such as the choice of convolutional neural network (CNN) architecture, have not received equal attention. We therefore revisit numerous previously proposed self-supervised models, conduct a thorough large-scale study and, as a result, uncover multiple crucial insights. We challenge a number of common practices in self-supervised visual representation learning and observe that standard recipes for CNN design do not always translate to self-supervised representation learning. As part of our study, we drastically boost the performance of previously proposed techniques and outperform previously published state-of-the-art results by a large margin.
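To make the notion of a pretext task concrete, below is a minimal sketch of rotation prediction, one of the self-supervised methods revisited in the paper: the network is trained to classify which of four rotations (0°, 90°, 180°, 270°) was applied to an unlabeled input image. The sketch assumes PyTorch/torchvision; the backbone choice, optimizer settings, and helper names (`rotate_batch`, `train_step`) are illustrative, not taken from the paper's code.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def rotate_batch(images: torch.Tensor):
    """Build 4 rotated copies of each image plus rotation labels 0..3."""
    rotations = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x = torch.cat(rotations, dim=0)                        # (4B, C, H, W)
    y = torch.arange(4).repeat_interleave(images.size(0))  # label k for block k
    return x, y

# Backbone with a 4-way rotation-classification head; part of the paper's point
# is that this architecture choice matters as much as the pretext task itself.
model = models.resnet50(weights=None)
model.fc = nn.Linear(model.fc.in_features, 4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

def train_step(images: torch.Tensor) -> float:
    """One self-supervised training step on a batch of unlabeled images."""
    x, y = rotate_batch(images)
    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```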

Self-Supervised Image Classification on ImageNet

| Model | Top 1 Accuracy | Global Rank | Top 5 Accuracy | Global Rank |
|-------|----------------|-------------|----------------|-------------|
| Revisited Rotation (RevNet-50 ×4) | 55.4% | #119 | 77.9% | #34 |
| Revisited Jigsaw (ResNet-50v1 ×2) | 44.6% | #125 | 68.0% | #39 |
| Revisited Exemplar (ResNet-50v1 ×3) | 46.0% | #124 | 68.8% | #38 |
| Revisited Rel.Patch.Loc (ResNet-50v1 ×2) | 51.4% | #121 | 74.0% | #36 |
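For context on how these numbers are produced: this benchmark follows the standard linear evaluation protocol, in which the self-supervised backbone is frozen and only a linear classifier on top of its features is trained on labeled ImageNet, so the Top-1/Top-5 columns measure the quality of the learned representation rather than of any fine-tuning. Below is a minimal sketch of the accuracy computation under that protocol, again assuming PyTorch; `backbone`, `head`, and `loader` are placeholder names.

```python
import torch
import torch.nn as nn

def linear_eval_accuracy(backbone: nn.Module, head: nn.Linear, loader):
    """Top-1 / Top-5 accuracy of a trained linear head over frozen features."""
    backbone.eval()
    head.eval()
    correct1 = correct5 = total = 0
    with torch.no_grad():
        for images, labels in loader:
            feats = backbone(images)              # frozen representation
            logits = head(feats)                  # linear classifier output
            top5 = logits.topk(5, dim=1).indices  # (B, 5) predicted classes
            correct1 += (top5[:, 0] == labels).sum().item()
            correct5 += (top5 == labels.unsqueeze(1)).any(dim=1).sum().item()
            total += labels.size(0)
    return correct1 / total, correct5 / total
```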
