Semi-Supervised Image Classification
124 papers with code • 58 benchmarks • 13 datasets
Semi-supervised image classification leverages unlabelled data as well as labelled data to improve classification performance.
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards:
- An overview of proxy-label approaches for semi-supervised learning - Sebastian Ruder
- Semi-Supervised Learning in Computer Vision - Amit Chaudhary
(Image credit: Self-Supervised Semi-Supervised Learning)
Libraries
Use these libraries to find Semi-Supervised Image Classification models and implementations.
Most implemented papers
Unsupervised Data Augmentation for Consistency Training
In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
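The core training signal can be sketched as a confidence-masked consistency loss between the model's prediction on a clean unlabeled example (sharpened and treated as a fixed target) and its prediction on a strongly augmented view. The temperature and threshold values below are illustrative, not the paper's tuned settings:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def uda_consistency_loss(logits_clean, logits_augmented,
                         temperature=0.4, threshold=0.8):
    """Consistency loss sketch in the spirit of UDA: KL divergence between a
    sharpened prediction on the clean example (a fixed target) and the
    prediction on its augmented view, keeping only confident examples."""
    p_target = softmax(logits_clean / temperature)      # sharpened target
    p_augmented = softmax(logits_augmented)
    confident = softmax(logits_clean).max(axis=-1) >= threshold
    kl = (p_target * (np.log(p_target + 1e-12)
                      - np.log(p_augmented + 1e-12))).sum(axis=-1)
    return (kl * confident).mean()
```

In the full method this term is added to the ordinary supervised cross-entropy on the labeled batch; the threshold masks out unlabeled examples the model is still unsure about.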
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
In addition, we also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without increasing the memory or compute requirements much.
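A minimal sketch of the multi-crop idea, with raw array slicing standing in for SwAV's random-resized crops; the crop counts and sizes here are illustrative, not the paper's settings:

```python
import numpy as np

def multi_crop(image, rng, n_global=2, n_local=4,
               global_size=160, local_size=96):
    """Multi-crop sketch: a few large (global) crops plus several smaller
    (local) ones, so extra views add little memory or compute compared with
    processing more full-resolution views."""
    H, W = image.shape[:2]
    crops = []
    for size, n in [(global_size, n_global), (local_size, n_local)]:
        for _ in range(n):
            y = rng.integers(0, H - size + 1)
            x = rng.integers(0, W - size + 1)
            crops.append(image[y:y + size, x:x + size])
    return crops
```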
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets.
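The core of VAT is finding, for each input, the small perturbation that most changes the model's output distribution, then penalizing that change. A framework-free sketch using one power-iteration step with a finite-difference gradient (the paper computes this gradient by backpropagation, with a much smaller `xi`):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q):
    return (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)

def virtual_adversarial_perturbation(model_fn, x, eps=2.0, xi=0.1,
                                     delta=1e-5, seed=0):
    """One power-iteration step of VAT. `model_fn` maps a batch of inputs to
    class probabilities; the returned perturbation lies on the eps-ball in
    the direction that most increases the KL from the current prediction."""
    p = model_fn(x)                                  # fixed "virtual" labels
    d = np.random.default_rng(seed).normal(size=x.shape)
    d /= np.linalg.norm(d, axis=-1, keepdims=True) + 1e-12
    grad = np.zeros_like(d)
    base = kl_divergence(p, model_fn(x + xi * d))
    for j in range(x.shape[-1]):                     # finite-difference grad
        step = np.zeros_like(d)
        step[..., j] = delta
        grad[..., j] = (kl_divergence(p, model_fn(x + xi * (d + step)))
                        - base) / delta
    return eps * grad / (np.linalg.norm(grad, axis=-1, keepdims=True) + 1e-12)
```

The VAT regularizer is then the KL divergence between the model's outputs at `x` and at `x + r`, which requires no labels and so applies to unlabeled data directly.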
Semi-Supervised Learning with Ladder Networks
We combine supervised learning with unsupervised learning in deep neural networks.
Meta Pseudo Labels
We present Meta Pseudo Labels, a semi-supervised learning method that achieves a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, which is 1.6% better than the existing state-of-the-art.
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels.
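The mechanism is simple: the teacher's weights are an exponential moving average (EMA) of the student's weights over training steps, and the student is penalized for disagreeing with the teacher's predictions on the same (differently augmented) input. A minimal numpy sketch of both pieces:

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.999):
    """Mean Teacher weight update, applied after every student training step:
    teacher <- alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

def consistency_cost(student_probs, teacher_probs):
    # Consistency cost: mean squared error between the two softmax outputs;
    # the teacher's predictions serve as the (unsupervised) targets.
    return float(np.mean((student_probs - teacher_probs) ** 2))
```

Averaging weights rather than predictions (as Temporal Ensembling does) lets the targets improve within an epoch instead of once per epoch.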
Big Self-Supervised Models are Strong Semi-Supervised Learners
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
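The third step can be sketched as a cross-entropy between the teacher's and student's temperature-scaled distributions on unlabeled data; the temperature parameter below is illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, tau=1.0):
    """Distillation step sketch: the (possibly smaller) student is trained to
    match the temperature-scaled teacher distribution on unlabeled examples,
    transferring the fine-tuned task knowledge without needing labels."""
    p_teacher = softmax(teacher_logits / tau)
    log_p_student = np.log(softmax(student_logits / tau) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()
```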
Temporal Ensembling for Semi-Supervised Learning
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled.
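The method maintains, for every training example, an exponential moving average of the network's predictions across epochs, and uses the bias-corrected average as the unsupervised consistency target. A sketch of that bookkeeping:

```python
import numpy as np

def temporal_ensemble_update(ensemble_preds, epoch_preds, epoch, alpha=0.6):
    """Temporal Ensembling bookkeeping: accumulate an EMA of each example's
    predictions over epochs, and bias-correct it (as in Adam) so that
    early-epoch targets are not shrunk toward zero."""
    ensemble_preds = alpha * ensemble_preds + (1 - alpha) * epoch_preds
    targets = ensemble_preds / (1 - alpha ** (epoch + 1))  # startup correction
    return ensemble_preds, targets
```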
Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss.
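The input preparation can be sketched as masking out a random patch that the generator must then in-paint from the surrounding context; the patch size and fill value below are illustrative:

```python
import numpy as np

def mask_random_patch(image, rng, patch_size=32, fill=0.0):
    """Context-conditional setup sketch: zero out a random square patch; the
    generator in-paints it conditioned on the visible context, while a
    discriminator judges whether the completed image looks real."""
    masked = image.copy()
    H, W = image.shape[:2]
    y = rng.integers(0, H - patch_size + 1)
    x = rng.integers(0, W - patch_size + 1)
    masked[y:y + patch_size, x:x + patch_size] = fill
    mask = np.zeros((H, W), dtype=bool)
    mask[y:y + patch_size, x:x + patch_size] = True
    return masked, mask
```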
Self-Supervised Learning of Pretext-Invariant Representations
The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images.