
Self-Supervised Image Classification

17 papers with code · Computer Vision

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function to learn with. One example is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space, and where the target can vary from sample to sample rather than being fixed (as the reconstruction target is in an autoencoder).
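To make the contrastive idea concrete, here is a minimal NumPy sketch of an InfoNCE-style contrastive loss, the kind used by methods such as SimCLR. The function name and the temperature value are illustrative assumptions, not any particular paper's exact formulation:

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """Contrastive (InfoNCE-style) loss: row i of z_a is the positive
    pair of row i of z_b; every other row of z_b acts as a negative."""
    # L2-normalise so the dot product becomes cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature            # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimise their negative log-prob.
    return -np.mean(np.diag(log_probs))
```

Perfectly aligned pairs give a lower loss than mismatched ones, which is the "varying target" behaviour described above: each sample's target is its own augmented view, not a fixed reconstruction.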

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a fraction of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
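A minimal sketch of the linear evaluation protocol: the encoder is never updated, and only a softmax-regression head is trained on the frozen features. The optimiser, learning rate, and step count below are illustrative assumptions:

```python
import numpy as np

def linear_probe(features, labels, num_classes, lr=0.1, steps=200):
    """Fit a linear classifier (softmax regression) on frozen features.
    Only W and b are trained; the encoder that produced `features`
    is left untouched, as in the linear evaluation protocol."""
    n, d = features.shape
    W = np.zeros((d, num_classes))
    b = np.zeros(num_classes)
    onehot = np.eye(num_classes)[labels]
    for _ in range(steps):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n                   # cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b
```

The downstream accuracy of this probe is what the linear-evaluation leaderboards report: it measures how linearly separable the classes are in the learnt representation space.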

You may want to read some blog posts on self-supervised learning before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20, which you can watch here (from 35:00 onwards).

( Image credit: A Simple Framework for Contrastive Learning of Visual Representations )

Leaderboards

Latest papers without code

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

17 Jun 2020

In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much.

CONTRASTIVE LEARNING DATA AUGMENTATION SELF-SUPERVISED IMAGE CLASSIFICATION SEMI-SUPERVISED IMAGE CLASSIFICATION
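To make the multi-crop idea concrete, here is a hedged NumPy sketch of sampling a few large "global" crops plus several cheaper low-resolution "local" crops, instead of two full-resolution views. The crop counts and sizes are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def multi_crop(image, n_global=2, n_local=4, global_size=160, local_size=96, seed=0):
    """Multi-crop sketch: mix a small number of large crops with several
    smaller, cheaper crops of the same image (sizes are hypothetical)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    crops = []
    for size, count in ((global_size, n_global), (local_size, n_local)):
        for _ in range(count):
            top = rng.integers(0, h - size + 1)
            left = rng.integers(0, w - size + 1)
            crops.append(image[top:top + size, left:left + size])
    return crops
```

Because the local crops have far fewer pixels than full-resolution views, the number of views can grow without a proportional increase in memory or compute.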

Prototypical Contrastive Learning of Unsupervised Representations

11 May 2020

This paper presents Prototypical Contrastive Learning (PCL), an unsupervised representation learning method that addresses the fundamental limitations of instance-wise contrastive learning.

CONTRASTIVE LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SEMI-SUPERVISED IMAGE CLASSIFICATION UNSUPERVISED REPRESENTATION LEARNING

Data-Efficient Image Recognition with Contrastive Predictive Coding

ICLR 2020

Human observers can learn to recognize new categories of objects from a handful of examples, yet doing so with machine perception remains an open challenge.

OBJECT DETECTION SELF-SUPERVISED IMAGE CLASSIFICATION SEMI-SUPERVISED IMAGE CLASSIFICATION

0-1 phase transitions in sparse spiked matrix estimation

12 Nov 2019

We consider statistical models of estimation of a rank-one matrix (the spike) corrupted by an additive Gaussian noise matrix in the sparse limit.

PARTIAL DOMAIN ADAPTATION SELF-SUPERVISED IMAGE CLASSIFICATION SEMI-SUPERVISED IMAGE CLASSIFICATION SKELETON BASED ACTION RECOGNITION
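The estimation setup can be sketched in NumPy: a sparse rank-one spike x xᵀ is observed through symmetric Gaussian noise. The scaling and the sparsity parameterisation below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def spiked_matrix(n, snr, density, seed=0):
    """Sparse spiked-matrix sketch: a rank-one spike outer(x, x), where each
    entry of x is nonzero with probability `density`, plus symmetric
    Gaussian noise (scaling chosen for illustration only)."""
    rng = np.random.default_rng(seed)
    mask = rng.random(n) < density            # sparse support of the spike
    x = rng.standard_normal(n) * mask
    noise = rng.standard_normal((n, n))
    noise = (noise + noise.T) / np.sqrt(2 * n)  # symmetric Gaussian noise
    return np.sqrt(snr / n) * np.outer(x, x) + noise, x
```

The statistical question is then for which signal-to-noise ratios and sparsity levels the spike x can be recovered from the observed matrix.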

Local Aggregation for Unsupervised Learning of Visual Embeddings

ICCV 2019

Unsupervised approaches to learning in neural networks are of substantial interest for furthering artificial intelligence, both because they would enable the training of networks without the need for large numbers of expensive annotations, and because they would be better models of the kind of general-purpose learning deployed by humans.

OBJECT DETECTION OBJECT RECOGNITION SCENE RECOGNITION SELF-SUPERVISED IMAGE CLASSIFICATION TRANSFER LEARNING

Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey

16 Feb 2019

This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos.

SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING