Billion-scale semi-supervised learning for image classification

2 May 2019  ·  I. Zeki Yalniz, Hervé Jégou, Kan Chen, Manohar Paluri, Dhruv Mahajan

This paper presents a study of semi-supervised learning with large convolutional networks. We propose a pipeline, based on a teacher/student paradigm, that leverages a large collection of unlabelled images (up to 1 billion). Our main goal is to improve the performance of a given target architecture, such as ResNet-50 or ResNeXt. We provide an extensive analysis of the success factors of our approach, which leads us to formulate some recommendations to produce high-accuracy models for image classification with semi-supervised learning. As a result, our approach brings important gains to standard architectures for image, video and fine-grained classification. For instance, by leveraging one billion unlabelled images, our learned vanilla ResNet-50 achieves 81.2% top-1 accuracy on the ImageNet benchmark.
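The teacher/student pipeline described in the abstract can be sketched in a few lines of PyTorch: (1) train a teacher on the labelled set, (2) score the unlabelled images with the teacher and keep the top-K most confident per class as pseudo-labels, (3) pre-train the student on those pseudo-labelled images, and (4) fine-tune the student on the labelled set. The sketch below is a simplification under stated assumptions, not the authors' released code: `labeled_loader`, `unlabeled_loader` and `pseudo_loader` are hypothetical DataLoaders, the hyper-parameters are placeholders, and each unlabelled image is ranked only under its top-scoring class (the paper ranks images per class score, so an image can be selected for several classes).

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50


def train(model, loader, epochs, lr=0.1):
    """Plain supervised training with SGD and cross-entropy."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(images), targets)
            loss.backward()
            opt.step()
    return model


@torch.no_grad()
def select_top_k(teacher, unlabeled_loader, num_classes, k):
    """For each class, keep the K unlabeled images on which the teacher is
    most confident, and return (sample index, pseudo-label) pairs."""
    teacher.eval()
    per_class = [[] for _ in range(num_classes)]   # (confidence, global index)
    seen = 0
    for images, _ in unlabeled_loader:             # any labels are ignored
        probs = teacher(images).softmax(dim=1)
        conf, cls = probs.max(dim=1)
        for i in range(images.size(0)):
            per_class[cls[i].item()].append((conf[i].item(), seen + i))
        seen += images.size(0)
    pseudo = []
    for c, items in enumerate(per_class):
        items.sort(reverse=True)                   # most confident first
        pseudo.extend((idx, c) for _, idx in items[:k])
    return pseudo


# Hypothetical end-to-end usage (loaders not defined here):
# teacher = train(resnet50(num_classes=1000), labeled_loader, epochs=90)      # step 1
# pseudo  = select_top_k(teacher, unlabeled_loader, num_classes=1000, k=16000)  # step 2
# student = train(resnet50(num_classes=1000), pseudo_loader, epochs=90)       # step 3
# student = train(student, labeled_loader, epochs=30, lr=0.01)                # step 4: fine-tune
```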


Datasets

ImageNet · OmniBenchmark


Results from the Paper



Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data
--- | --- | --- | --- | --- | --- | ---
Image Classification | ImageNet | ResNeXt-101 32x8d (semi-weakly sup.) | Top 1 Accuracy | 84.3% | #301 | Yes
Image Classification | ImageNet | ResNeXt-101 32x8d (semi-weakly sup.) | Number of params | 88M | #827 | Yes
Image Classification | ImageNet | ResNeXt-101 32x4d (semi-weakly sup.) | Top 1 Accuracy | 83.4% | #389 | Yes
Image Classification | ImageNet | ResNeXt-101 32x4d (semi-weakly sup.) | Number of params | 42M | #681 | Yes
Image Classification | ImageNet | ResNeXt-101 32x16d (semi-weakly sup.) | Top 1 Accuracy | 84.8% | #266 | Yes
Image Classification | ImageNet | ResNeXt-101 32x16d (semi-weakly sup.) | Number of params | 193M | #887 | Yes
Image Classification | OmniBenchmark | IG-1B | Average Top-1 Accuracy | 40.4 | #6 | Yes
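The semi-weakly supervised ResNeXt models in the table correspond to pretrained weights released by the authors, which are commonly loaded through torch.hub. The snippet below is a sketch that assumes the `facebookresearch/semi-supervised-ImageNet1K-models` hub repository and its `resnext101_32x8d_swsl` entry point; verify the exact model names against that repository before relying on them.

```python
import torch

# Assumed hub repo and entry point; see
# https://github.com/facebookresearch/semi-supervised-ImageNet1K-models
model = torch.hub.load(
    'facebookresearch/semi-supervised-ImageNet1K-models',
    'resnext101_32x8d_swsl',  # semi-weakly supervised ResNeXt-101 32x8d
)
model.eval()

# Standard ImageNet inference expects a 224x224 RGB image normalized with the
# usual mean/std; a random tensor stands in here for a quick shape check.
dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # torch.Size([1, 1000])
```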

Methods