Semi-Supervised Image Classification
124 papers with code • 58 benchmarks • 13 datasets
Semi-supervised image classification leverages unlabelled data in addition to labelled data to improve classification performance.
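The common recipe behind many of the methods listed below is to combine a supervised loss on the labelled batch with a pseudo-label loss on confident predictions for the unlabelled batch. A minimal pure-Python sketch (the 0.95 confidence threshold is illustrative, not taken from any particular paper):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def semi_supervised_loss(labeled_logits, labels, unlabeled_logits, threshold=0.95):
    # Supervised term: standard cross-entropy on the labelled examples.
    sup = 0.0
    for logits, y in zip(labeled_logits, labels):
        sup -= math.log(softmax(logits)[y])
    sup /= len(labels)

    # Unsupervised term: pseudo-label each unlabelled example with its most
    # probable class, but keep it only if confidence clears the threshold.
    unsup_terms = []
    for logits in unlabeled_logits:
        conf = max(softmax(logits))
        if conf >= threshold:
            unsup_terms.append(-math.log(conf))
    unsup = sum(unsup_terms) / len(unsup_terms) if unsup_terms else 0.0
    return sup + unsup
```

Low-confidence unlabelled examples contribute nothing, which is how such methods avoid reinforcing their own early mistakes.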
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards:
- An overview of proxy-label approaches for semi-supervised learning - Sebastian Ruder
- Semi-Supervised Learning in Computer Vision - Amit Chaudhary
(Image credit: Self-Supervised Semi-Supervised Learning)
Libraries
Use these libraries to find Semi-Supervised Image Classification models and implementations.
Latest papers with no code
DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples
Extensive experiments on four standard SSL benchmarks show that DP-SSL can provide reliable labels for unlabeled data and achieve better classification performance on test sets than existing SSL methods, especially when only a small number of labeled samples are available.
Dash: Semi-Supervised Learning with Dynamic Thresholding
In this work, we develop a simple yet powerful framework whose key idea is to select a subset of training examples from the unlabeled data when running existing SSL methods, so that only unlabeled examples whose pseudo labels are consistent with the labeled data are used to train models.
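The selection step can be sketched as filtering unlabelled examples by their per-example loss against a threshold that shrinks as training proceeds. This is a hedged illustration in the spirit of Dash; the decay schedule and the hyperparameters rho0 and gamma are illustrative stand-ins, not the paper's values:

```python
def select_unlabeled(losses, step, rho0=1.0, gamma=1.01):
    """Keep only unlabelled examples whose per-example loss falls below a
    dynamically decreasing threshold. Returns the selected indices."""
    rho = rho0 * gamma ** (-step)  # threshold decays over training steps
    return [i for i, loss in enumerate(losses) if loss < rho]
```

Early in training the threshold is loose and most pseudo-labelled examples pass; later it tightens so that only examples the model fits well (i.e. whose pseudo labels look reliable) are retained.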
Self-Supervised Wasserstein Pseudo-Labeling for Semi-Supervised Image Classification
The goal is to use the Wasserstein metric to provide pseudo labels for the unlabeled images, which are used to train a convolutional neural network (CNN) for the classification task in a semi-supervised learning (SSL) manner.
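For discrete distributions on the same ordered support, the 1-D Wasserstein (earth mover's) distance reduces to the sum of absolute differences between the two CDFs, and pseudo-labeling then amounts to picking the nearest class under that distance. A toy sketch, assuming histogram-like features and per-class reference histograms as illustrative stand-ins for the paper's exact construction:

```python
def wasserstein_1d(p, q):
    """1-D earth mover's distance between two discrete distributions on the
    same ordered support: sum of absolute CDF differences."""
    cum_p = cum_q = 0.0
    dist = 0.0
    for a, b in zip(p, q):
        cum_p += a
        cum_q += b
        dist += abs(cum_p - cum_q)
    return dist

def wasserstein_pseudo_label(feature_hist, class_hists):
    """Assign an unlabelled example the class whose reference histogram is
    closest in Wasserstein distance."""
    dists = [wasserstein_1d(feature_hist, h) for h in class_hists]
    return dists.index(min(dists))
```

Unlike pointwise divergences such as KL, the Wasserstein distance accounts for how far probability mass must move, so nearby bins count as more similar than distant ones.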
Diffusion-Based Representation Learning
In contrast, the introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective and thus encodes the information needed for denoising.
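In standard denoising score matching, the model's score at a noised point is regressed onto the score of the Gaussian perturbation kernel. A toy 1-D version of that objective (illustrative of the general DSM formulation, not this paper's specific variant):

```python
def denoising_score_matching_loss(score_fn, x, sigma, noise):
    """Toy 1-D denoising score matching: the model's score at the noised
    point should match the score of the Gaussian perturbation kernel,
    -(x_noisy - x) / sigma**2."""
    x_noisy = x + sigma * noise
    target = -(x_noisy - x) / sigma ** 2
    return (score_fn(x_noisy) - target) ** 2
```

A score function that exactly points noised samples back toward the clean data point drives this loss to zero, which is why minimizing it forces the representation to encode the information needed for denoising.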
Vanishing Twin GAN: How training a weak Generative Adversarial Network can improve semi-supervised image classification
By training a weak GAN and using its generated output images in parallel with the regular GAN, Vanishing Twin training improves semi-supervised image classification in settings where image similarity can hurt classification.
Multi-class Generative Adversarial Nets for Semi-supervised Image Classification
We propose a modification to the traditional training of GANs that allows for improved multi-class classification in similar classes of images in a semi-supervised learning framework.
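A common construction for semi-supervised classification with GANs gives the discriminator K real-class outputs plus one extra "fake" output (in the style of Salimans et al., 2016); whether this paper uses exactly this modification is an assumption. A sketch of the three per-example discriminator losses:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def d_loss_labeled(logits, y):
    # Labeled real image: cross-entropy over the K real classes only
    # (renormalize over the first K entries, ignoring the fake logit).
    probs = softmax(logits[:-1])
    return -math.log(probs[y])

def d_loss_unlabeled(logits):
    # Unlabeled real image: maximize total mass on the K real classes.
    probs = softmax(logits)
    return -math.log(sum(probs[:-1]))

def d_loss_generated(logits):
    # Generated image: should land in the extra (K+1)-th "fake" class.
    probs = softmax(logits)
    return -math.log(probs[-1])
```

The unlabeled term is what makes the setup semi-supervised: real images without labels still supervise the discriminator, since they must be "some real class" rather than fake.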
SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning
This paper introduces SelfMatch, a semi-supervised learning method that combines the power of contrastive self-supervised learning and consistency regularization.
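The consistency-regularization half of such methods penalizes disagreement between the model's predictions on two augmentations of the same unlabelled image. A minimal sketch using mean squared error between the two predicted distributions (the actual SelfMatch objective may use a different divergence):

```python
def consistency_loss(probs_weak, probs_strong):
    """Mean squared disagreement between predicted class distributions on
    weakly and strongly augmented views of the same unlabelled images."""
    per_example = []
    for p, q in zip(probs_weak, probs_strong):
        per_example.append(sum((a - b) ** 2 for a, b in zip(p, q)) / len(p))
    return sum(per_example) / len(per_example)
```

The term is zero exactly when the model gives identical predictions on both views, so minimizing it makes the classifier invariant to the augmentations without needing any labels.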
LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching
Although data is abundant, data labeling is expensive.
Multi-Task Curriculum Framework for Open-Set Semi-Supervised Learning
Semi-supervised learning (SSL) has been proposed to leverage unlabeled data for training powerful models when only limited labeled data is available.
Improving Face Recognition by Clustering Unlabeled Faces in the Wild
While deep face recognition has benefited significantly from large-scale labeled data, current research is focused on leveraging unlabeled data to further boost performance, reducing the cost of human annotation.