
Self-Supervised Image Classification

17 papers with code · Computer Vision

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function to learn with. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space, and where the target can vary across samples instead of being a fixed target to reconstruct (as in the case of autoencoders).
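
As an illustration, here is a minimal sketch of a SimCLR-style NT-Xent contrastive loss in PyTorch. The function name, shapes, and default temperature are illustrative assumptions, not code from any repository listed below:

```python
# A minimal sketch of a SimCLR-style NT-Xent contrastive loss
# (assumed names and shapes; not taken from any listed repository).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (n, d) embeddings of two augmented views of the same batch."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2n, d), unit norm
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a sample is not its own pair
    # The positive for row i is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)
```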

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a percentage of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
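
A minimal sketch of the linear evaluation protocol, assuming a pretrained `backbone`, its output dimension `feature_dim`, a `num_classes` label set, and a labeled `loader` are already defined (all hypothetical names):

```python
# Linear evaluation sketch: freeze the self-supervised encoder and
# train only a linear classifier on top of its features.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone.eval()                               # keep the encoder frozen
for p in backbone.parameters():
    p.requires_grad = False

linear = nn.Linear(feature_dim, num_classes)  # the only trainable module
opt = torch.optim.SGD(linear.parameters(), lr=0.1, momentum=0.9)

for images, labels in loader:
    with torch.no_grad():
        feats = backbone(images)              # frozen representations
    loss = F.cross_entropy(linear(feats), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```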

Before reading the papers and checking the leaderboards, you may also want to watch Yann LeCun's talk at AAAI-20 (from around the 35:00 mark).

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)

Leaderboards

Latest papers with code

Generative Pretraining from Pixels

Preprint 2020 · openai/image-gpt

Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images.

SELF-SUPERVISED IMAGE CLASSIFICATION UNSUPERVISED REPRESENTATION LEARNING

★ 492 · 17 Jul 2020

Big Self-Supervised Models are Strong Semi-Supervised Learners

17 Jun 2020 · google-research/simclr

The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
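
As a rough illustration of the third step, here is a hedged sketch of distillation with unlabeled examples; `teacher` (the fine-tuned big model), `student`, `unlabeled_loader`, and the temperature value are assumptions, not the authors' code:

```python
# Distillation sketch: the student matches the teacher's softened
# class probabilities on unlabeled data.
import torch
import torch.nn.functional as F

tau = 1.0  # distillation temperature (illustrative value)
opt = torch.optim.SGD(student.parameters(), lr=0.1, momentum=0.9)

for x in unlabeled_loader:
    with torch.no_grad():
        p_teacher = F.softmax(teacher(x) / tau, dim=1)   # soft targets
    log_p_student = F.log_softmax(student(x) / tau, dim=1)
    # Cross-entropy between teacher and student distributions.
    loss = -(p_teacher * log_p_student).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```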

SELF-SUPERVISED IMAGE CLASSIFICATION SEMI-SUPERVISED IMAGE CLASSIFICATION

★ 1,002 · 17 Jun 2020

Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning

13 Jun 2020 · lucidrains/byol-pytorch

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
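
A minimal sketch of BYOL's two core pieces; the function names are illustrative and do not reflect the lucidrains/byol-pytorch API:

```python
# BYOL sketch: a prediction loss between the online network's prediction
# and the target network's projection, plus an exponential-moving-average
# (EMA) update that makes the target slowly track the online weights.
import torch
import torch.nn.functional as F

def byol_loss(p_online, z_target):
    # Equivalent to a negative cosine similarity, scaled to lie in [0, 4].
    p = F.normalize(p_online, dim=1)
    z = F.normalize(z_target, dim=1)
    return 2 - 2 * (p * z).sum(dim=1).mean()

@torch.no_grad()
def ema_update(target_net, online_net, momentum=0.996):
    # Target parameters slowly track the online parameters; no gradients
    # flow into the target network.
    for pt, po in zip(target_net.parameters(), online_net.parameters()):
        pt.mul_(momentum).add_(po, alpha=1 - momentum)
```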

REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION

★ 155 · 13 Jun 2020

What Makes for Good Views for Contrastive Learning?

20 May 2020 · HobbitLong/PyContrast

Contrastive learning between multiple views of the data has recently achieved state of the art performance in the field of self-supervised representation learning.

CONTRASTIVE LEARNING DATA AUGMENTATION INSTANCE SEGMENTATION OBJECT DETECTION REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SEMANTIC SEGMENTATION

★ 516 · 20 May 2020

Improved Baselines with Momentum Contrastive Learning

9 Mar 2020 · facebookresearch/moco

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.

CONTRASTIVE LEARNING DATA AUGMENTATION REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION

★ 1,178 · 09 Mar 2020

Self-Supervised Learning of Pretext-Invariant Representations

CVPR 2020 · HobbitLong/PyContrast

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images.

OBJECT DETECTION REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION

★ 516 · 04 Dec 2019

Self-labelling via simultaneous clustering and representation learning

ICLR 2020 · yukimasano/self-label

Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks.

IMAGE CLUSTERING REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING

★ 151 · 13 Nov 2019

On Mutual Information Maximization for Representation Learning

ICLR 2020 · google-research/google-research

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.
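
For context, a commonly used estimator in this family is InfoNCE, which lower-bounds the MI between two views; with K samples per batch (one positive pair, the rest acting as negatives) and a critic f, a standard form of the bound is:

$$
I(X;Y) \;\geq\; \log K \;-\; \mathcal{L}_{\mathrm{InfoNCE}},
\qquad
\mathcal{L}_{\mathrm{InfoNCE}} \;=\; -\,\mathbb{E}\!\left[\log \frac{e^{f(x_i,\,y_i)}}{\tfrac{1}{K}\sum_{j=1}^{K} e^{f(x_i,\,y_j)}}\right].
$$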

REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION

★ 11,206 · 31 Jul 2019