Semi-supervised image classification leverages unlabelled data as well as labelled data to increase classification performance.
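Most of the methods listed below share the same skeleton: a supervised loss on the labelled batch plus a weighted unsupervised term on the unlabelled batch. Here is a minimal sketch of that shared objective, assuming a simple mean-squared consistency term and an illustrative weight `lambda_u`; neither choice is any single paper's method:

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_l, y_l, x_u, x_u_aug, lambda_u=1.0):
    """Supervised cross-entropy plus a consistency term on unlabelled data."""
    sup = F.cross_entropy(model(x_l), y_l)
    with torch.no_grad():
        target = model(x_u).softmax(dim=-1)      # prediction on the clean view
    unsup = F.mse_loss(model(x_u_aug).softmax(dim=-1), target)
    return sup + lambda_u * unsup
```

The papers below differ mainly in how the unlabelled target is produced and how the perturbed view is generated.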
You may want to read some blog posts for an overview before reading the papers and checking the leaderboards.
(Image credit: Self-Supervised Semi-Supervised Learning)
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
Ranked #9 on Conditional Image Generation on CIFAR-10
Using it to provide perturbations for semi-supervised consistency regularization, we achieve a state-of-the-art result on ImageNet with 10% labeled data: a top-5 error of 8.76% and a top-1 error of 26.06%.
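A rough sketch of mask-based perturbation for consistency regularization, using a random rectangular mask as a stand-in for the paper's mask shapes; the teacher/student split and all names here are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def random_rect_mask(n, h, w, device):
    """One random rectangular binary mask per image (stand-in for richer mask shapes)."""
    mask = torch.ones(n, 1, h, w, device=device)
    for i in range(n):
        mh = torch.randint(h // 4, h // 2 + 1, (1,)).item()
        mw = torch.randint(w // 4, w // 2 + 1, (1,)).item()
        top = torch.randint(0, h - mh + 1, (1,)).item()
        left = torch.randint(0, w - mw + 1, (1,)).item()
        mask[i, :, top:top + mh, left:left + mw] = 0.0
    return mask

def masked_consistency_loss(student, teacher, x_u):
    """Teacher predicts on the clean image; student must match on the masked one."""
    n, _, h, w = x_u.shape
    mask = random_rect_mask(n, h, w, x_u.device)
    with torch.no_grad():
        target = teacher(x_u).softmax(dim=-1)
    pred = student(x_u * mask).softmax(dim=-1)
    return F.mse_loss(pred, target)
```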
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss.
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
Ranked #3 on Domain Generalization on ImageNet-A
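mixup itself fits in a few lines: train on convex combinations of pairs of inputs and of their labels. A minimal sketch; the Beta distribution over the mixing coefficient follows the paper, while the batch-permutation pairing is the common implementation shortcut:

```python
import torch
import torch.nn.functional as F
from torch.distributions import Beta

def mixup_loss(model, x, y, alpha=0.2):
    """Cross-entropy on a convex combination of each example and a random partner."""
    lam = Beta(alpha, alpha).sample().item()   # mixing coefficient in [0, 1]
    perm = torch.randperm(x.size(0))
    logits = model(lam * x + (1.0 - lam) * x[perm])
    return lam * F.cross_entropy(logits, y) + (1.0 - lam) * F.cross_entropy(logits, y[perm])
```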
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
Ranked #2 on Self-Supervised Image Classification on ImageNet
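A condensed sketch of that online/target setup, assuming generic `encoder`, `projector`, and `predictor` modules and an illustrative EMA momentum; the paper additionally symmetrizes the loss over both view orderings:

```python
import copy
import torch
import torch.nn.functional as F

class OnlineTargetPair(torch.nn.Module):
    """BYOL-style pair: an online network plus the EMA target it must predict."""

    def __init__(self, encoder, projector, predictor, momentum=0.99):
        super().__init__()
        self.online = torch.nn.Sequential(encoder, projector)
        self.predictor = predictor
        self.target = copy.deepcopy(self.online)   # EMA copy, never backpropagated
        for p in self.target.parameters():
            p.requires_grad = False
        self.m = momentum

    def loss(self, view1, view2):
        """Online prediction from view 1 matches the target projection of view 2."""
        p = F.normalize(self.predictor(self.online(view1)), dim=-1)
        with torch.no_grad():
            z = F.normalize(self.target(view2), dim=-1)
        return 2.0 - 2.0 * (p * z).sum(dim=-1).mean()   # equals 2 * (1 - cosine)

    @torch.no_grad()
    def update_target(self):
        for po, pt in zip(self.online.parameters(), self.target.parameters()):
            pt.mul_(self.m).add_(po, alpha=1.0 - self.m)
```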
In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
Ranked #1 on Sentiment Analysis on Yelp Binary classification
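The consistency objective underneath is compact: hold the clean prediction fixed and pull the prediction on a strongly augmented copy toward it. A sketch assuming a KL term, a sharpening temperature, and a placeholder `strong_augment` function (e.g., RandAugment for images in the paper):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_u, strong_augment, temperature=0.4):
    """KL divergence from the fixed clean prediction to the augmented prediction."""
    with torch.no_grad():
        # Sharpened target from the unaugmented image; no gradients flow here.
        target = F.softmax(model(x_u) / temperature, dim=-1)
    log_pred = F.log_softmax(model(strong_augment(x_u)), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```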
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
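The third step is standard knowledge distillation on unlabeled images; a minimal sketch, with the temperature value chosen for illustration:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student, teacher, x_u, tau=1.0):
    """Student matches the fine-tuned teacher's soft labels on unlabeled images."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x_u) / tau, dim=-1)
    log_probs = F.log_softmax(student(x_u) / tau, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()
```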
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
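At the heart of the framework is the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss over two augmented views of each image. A compact sketch; the temperature value is illustrative:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss for a batch of n positive pairs (z1[i], z2[i])."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # (2n, d) unit vectors
    sim = z @ z.t() / temperature                         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-similarity
    # The positive for row i is its other view: i + n for the first half, i - n after.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```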
Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels.
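The method's two moving parts are an exponential moving average of the student's weights and a consistency cost between differently perturbed inputs; a compact sketch, with the paper's consistency-weight ramp-up omitted:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """Teacher weights track an exponential moving average of student weights."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(decay).add_(ps, alpha=1.0 - decay)

def mean_teacher_loss(student, teacher, x_l, y_l, x_u1, x_u2, weight=1.0):
    """Supervised cross-entropy plus MSE consistency against the teacher."""
    sup = F.cross_entropy(student(x_l), y_l)
    with torch.no_grad():
        t = teacher(x_u1).softmax(dim=-1)   # teacher sees one perturbed view
    s = student(x_u2).softmax(dim=-1)       # student sees another
    return sup + weight * F.mse_loss(s, t)
```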
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets.