Semi-supervised image classification leverages unlabeled data in addition to labeled data to improve classification performance.
You may want to read some introductory blog posts for an overview before reading the papers and checking the leaderboard.
(Image credit: Self-Supervised Semi-Supervised Learning)
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
#9 best model for Conditional Image Generation on CIFAR-10
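One of the training procedures this paper introduces is feature matching: rather than maximizing the discriminator's output directly, the generator is trained to match the statistics of an intermediate discriminator layer on real data. A minimal PyTorch-style sketch, where `d_features_real` and `d_features_fake` are hypothetical names for batches of intermediate discriminator activations:

```python
import torch

def feature_matching_loss(d_features_real, d_features_fake):
    # Match the mean intermediate discriminator activations of real and
    # generated batches instead of the discriminator's final output.
    mean_real = d_features_real.mean(dim=0)
    mean_fake = d_features_fake.mean(dim=0)
    return torch.mean((mean_real - mean_fake) ** 2)
```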
Using CowMask as the augmentation method in semi-supervised consistency regularization, we establish a new state-of-the-art result on ImageNet with 10% labeled data, with a top-5 error of 8.76% and top-1 error of 26.06%.
#2 best model for Semi-Supervised Image Classification on ImageNet - 10% labeled data
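CowMask produces its cow-patterned masks by low-pass filtering Gaussian noise and thresholding the result. A rough sketch of that idea in PyTorch; the kernel construction and quantile-based threshold below are simplifications, not the paper's exact procedure:

```python
import torch
import torch.nn.functional as F

def cow_mask(height, width, sigma=8.0, mask_fraction=0.5):
    noise = torch.randn(1, 1, height, width)
    # Separable Gaussian blur with an odd-sized kernel of ~4 sigma.
    k = int(4 * sigma) | 1
    xs = torch.arange(k, dtype=torch.float32) - k // 2
    g = torch.exp(-xs ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    noise = F.conv2d(noise, g.view(1, 1, 1, k), padding=(0, k // 2))
    noise = F.conv2d(noise, g.view(1, 1, k, 1), padding=(k // 2, 0))
    # Threshold so roughly `mask_fraction` of the pixels are masked.
    threshold = torch.quantile(noise.flatten(), mask_fraction)
    return (noise < threshold).float().squeeze()
```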
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss.
#5 best model for Semi-Supervised Image Classification on STL-10, 1000 Labels
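The core training signal can be sketched as follows, under assumed names (`generator`, `discriminator`, `bce`, and a binary `mask` that is 1 where pixels are kept); this is an illustrative reading of the approach, not the authors' exact losses:

```python
import torch

def inpainting_step(x_unlabeled, mask, generator, discriminator, bce):
    # Hide a region of the image and let the generator fill it in.
    x_masked = x_unlabeled * mask
    x_filled = x_masked + generator(x_masked) * (1 - mask)
    # Discriminator: tell real images from in-painted ones.
    d_real = discriminator(x_unlabeled)
    d_fake = discriminator(x_filled.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    # Generator: fool the discriminator (the adversarial loss).
    g_fake = discriminator(x_filled)
    g_loss = bce(g_fake, torch.ones_like(g_fake))
    return d_loss, g_loss
```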
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
#3 best model for Domain Generalization on ImageNet-A
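mixup itself is only a few lines: convex combinations of random pairs of examples and their labels, with a Beta-distributed mixing coefficient. A minimal sketch assuming one-hot labels:

```python
import torch

def mixup(x, y_onehot, alpha=0.2):
    # Sample a mixing coefficient and a random pairing of the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    idx = torch.randperm(x.size(0))
    # Blend inputs and labels with the same coefficient.
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[idx]
    return x_mix, y_mix
```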
In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of the noise, specifically noise produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
#2 best model for Text Classification on IMDb
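The unsupervised part of this approach (UDA) is a consistency loss between predictions on a clean example and on a strongly augmented view of it. A simplified sketch, where `strong_augment` stands in for an advanced augmentation policy such as RandAugment; the full method also adds prediction sharpening and confidence-based masking:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, strong_augment):
    # The prediction on the clean input is treated as a fixed target.
    with torch.no_grad():
        target = F.softmax(model(x_unlabeled), dim=-1)
    # Push the prediction on the augmented view toward that target.
    log_pred = F.log_softmax(model(strong_augment(x_unlabeled)), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```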
Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels.
#5 best model for Semi-Supervised Image Classification on SVHN, 250 Labels
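The teacher in Mean Teacher is not trained by gradient descent: its weights are an exponential moving average of the student's weights, and the student is penalized for disagreeing with the teacher's predictions on unlabeled data. A minimal sketch of the EMA update:

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # teacher <- decay * teacher + (1 - decay) * student, parameter-wise.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1 - decay)
```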
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets.
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
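At SimCLR's core is the NT-Xent contrastive loss: two augmented views of the same image are positives, and every other example in the batch is a negative. A compact sketch, assuming `z1` and `z2` are the projection-head embeddings of the two views:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d)
    sim = z @ z.t() / temperature                 # cosine similarities
    sim.fill_diagonal_(float("-inf"))             # exclude self-pairs
    # Row i's positive is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```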
In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets.
#4 best model for Semi-Supervised Image Classification on CIFAR-10, 250 Labels
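VAT penalizes the change in the model's prediction under the most damaging small input perturbation, found by power iteration on the input gradient. A simplified sketch (the hyperparameter values are illustrative):

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=2.5, n_power=1):
    with torch.no_grad():
        target = F.softmax(model(x), dim=-1)
    # Power iteration: refine a random direction toward the adversarial one.
    d = torch.randn_like(x)
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        log_p = F.log_softmax(model(x + d), dim=-1)
        dist = F.kl_div(log_p, target, reduction="batchmean")
        d = torch.autograd.grad(dist, d)[0]
    # Penalize the prediction change under the scaled perturbation.
    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    log_p = F.log_softmax(model(x + r_adv), dim=-1)
    return F.kl_div(log_p, target, reduction="batchmean")
```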
We combine supervised learning with unsupervised learning in deep neural networks.
#17 best model for Semi-Supervised Image Classification on CIFAR-10, 4000 Labels
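In the Ladder-network setting, this combination amounts to a supervised cross-entropy term plus per-layer denoising reconstruction costs that unlabeled examples can also contribute to. A schematic sketch with hypothetical inputs (`clean_acts`, `denoised_acts`, `layer_weights`):

```python
import torch.nn.functional as F

def combined_loss(logits, labels, clean_acts, denoised_acts, layer_weights):
    # Supervised term on the labeled portion of the batch.
    supervised = F.cross_entropy(logits, labels)
    # Unsupervised term: reconstruct each clean activation from its
    # noisy counterpart, weighted per layer.
    denoising = sum(w * F.mse_loss(z_hat, z.detach())
                    for w, z, z_hat in zip(layer_weights,
                                           clean_acts, denoised_acts))
    return supervised + denoising
```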