Semi-Supervised Image Classification
121 papers with code • 46 benchmarks • 13 datasets
Semi-supervised image classification leverages unlabeled data alongside labeled data to improve classification performance.
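One common family of approaches is self-training: fit a classifier on the labeled set, predict on the unlabeled set, and promote only high-confidence predictions to pseudo-labels for the next round of training. A minimal NumPy sketch of one such round, using a nearest-centroid classifier as a stand-in model (the function name, classifier, and threshold are illustrative assumptions, not any specific paper's method):

```python
import numpy as np

def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.9):
    """One round of self-training with a nearest-centroid classifier.

    Any classifier exposing class probabilities could replace the
    centroid model; this sketch keeps everything in plain NumPy.
    """
    classes = np.unique(y_lab)
    centroids = np.stack([X_lab[y_lab == c].mean(axis=0) for c in classes])

    # Softmax over negative distances as a stand-in for class probabilities.
    d = np.linalg.norm(X_unlab[:, None, :] - centroids[None, :, :], axis=-1)
    probs = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)

    conf = probs.max(axis=1)
    preds = classes[probs.argmax(axis=1)]
    keep = conf >= threshold  # only confident predictions become labels

    # Augment the labeled set with the confident pseudo-labels.
    X_new = np.concatenate([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, preds[keep]])
    return X_new, y_new, keep
```

In practice this loop is repeated, and the confidence threshold is what keeps early mistakes from snowballing (the "confirmation bias" several papers below address).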
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards:
- An overview of proxy-label approaches for semi-supervised learning - Sebastian Ruder
- Semi-Supervised Learning in Computer Vision - Amit Chaudhary
(Image credit: Self-Supervised Semi-Supervised Learning)
Libraries
Use these libraries to find Semi-Supervised Image Classification models and implementations.

Latest papers with no code
Color-$S^{4}L$: Self-supervised Semi-supervised Learning with Image Colorization
This work addresses semi-supervised image classification by integrating several effective self-supervised pretext tasks.
How To Overcome Confirmation Bias in Semi-Supervised Image Classification By Active Learning
We conduct experiments with SSL and AL on simulated data challenges and find that random sampling does not mitigate confirmation bias and, in some cases, leads to worse performance than supervised learning.
Graph Convolutional Networks based on Manifold Learning for Semi-Supervised Image Classification
In spite of many advances, most of the approaches require a large amount of labeled data, which is often not available, due to costs and difficulties of manual labeling processes.
Semi-MAE: Masked Autoencoders for Semi-supervised Vision Transformers
To alleviate this issue, and inspired by the masked autoencoder (MAE), a data-efficient self-supervised learner, we propose Semi-MAE: a pure ViT-based SSL framework with a parallel MAE branch that assists visual representation learning and makes the pseudo labels more accurate.
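The MAE branch mentioned above rests on random patch masking: hide most image patches, encode only the visible ones, and reconstruct the rest. A minimal sketch of the masking step (the 75% ratio follows the original MAE paper's default; this is an illustrative fragment, not Semi-MAE's implementation):

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio=0.75, rng=None):
    """MAE-style random masking over image patches.

    Returns a boolean mask where True marks a hidden patch. The
    encoder would process only the visible (False) patches, and the
    decoder would reconstruct the masked ones.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n_mask = int(num_patches * mask_ratio)
    order = rng.permutation(num_patches)
    mask = np.zeros(num_patches, dtype=bool)
    mask[order[:n_mask]] = True  # True = masked (to be reconstructed)
    return mask
```

For example, a 224x224 image split into 16x16 patches yields 196 patches, of which 147 would be masked at the default ratio.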
Self Meta Pseudo Labels: Meta Pseudo Labels Without The Teacher
We present Self Meta Pseudo Labels, a novel semi-supervised learning method similar to Meta Pseudo Labels but without the teacher model.
Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework
In this paper, we first show that federated ADMM is essentially a client-variance-reduced algorithm.
Contrastive Regularization for Semi-Supervised Learning
Consistency regularization on label predictions has become a fundamental technique in semi-supervised learning, but it still requires a large number of training iterations to reach high performance.
Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?
Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform the supervised baseline in a like-for-like comparison over a range of ResNet architectures.
Towards Discovering the Effectiveness of Moderately Confident Samples for Semi-Supervised Learning
To address these problems, we propose a novel Taylor-expansion-inspired filtration (TEIF) framework, which admits moderately confident samples whose features or gradients are similar to those averaged over the labeled and highly confident unlabeled data.
DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples
Extensive experiments on four standard SSL benchmarks show that DP-SSL can provide reliable labels for unlabeled data and achieve better classification performance on test sets than existing SSL methods, especially when only a small number of labeled samples are available.