Semi-Supervised Image Classification
124 papers with code • 58 benchmarks • 13 datasets
Semi-supervised image classification leverages unlabelled data as well as labelled data to increase classification performance.
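The generic objective behind most of the methods below is a supervised loss on the labelled batch plus a weighted unsupervised term on the unlabelled batch. A minimal sketch (the weight `lam` and the per-example loss values are hypothetical stand-ins, not from any specific paper):

```python
import numpy as np

def semi_supervised_loss(sup_losses, unsup_losses, lam=1.0):
    """Generic SSL objective: supervised term on labelled data plus a
    weighted unsupervised term (consistency, pseudo-label, etc.)
    on unlabelled data. `lam` trades off the two terms."""
    return np.mean(sup_losses) + lam * np.mean(unsup_losses)

# e.g. cross-entropy on 4 labelled images, a consistency loss on 16 unlabelled ones
total = semi_supervised_loss(np.array([0.2, 0.5, 0.1, 0.4]),
                             np.array([0.05] * 16), lam=0.5)
# total = 0.3 + 0.5 * 0.05 = 0.325
```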
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards:
- An overview of proxy-label approaches for semi-supervised learning - Sebastian Ruder
- Semi-Supervised Learning in Computer Vision - Amit Chaudhary
(Image credit: Self-Supervised Semi-Supervised Learning)
Libraries
Use these libraries to find Semi-Supervised Image Classification models and implementations.

Most implemented papers
VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning
Recent self-supervised methods for image representation learning are based on maximizing the agreement between embedding vectors from different views of the same image.
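VICReg's three named terms can be sketched as follows. This is an illustrative NumPy reduction, not the authors' implementation; the loss weights and batch shapes are placeholders:

```python
import numpy as np

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Sketch of the variance-invariance-covariance regularization terms
    for two embedding batches z1, z2 (two views of the same images)."""
    n, d = z1.shape
    # Invariance: MSE between embeddings of the two views.
    inv = np.mean((z1 - z2) ** 2)
    # Variance: hinge that keeps each embedding dimension's std above 1.
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, 1.0 - std))
    var = var_term(z1) + var_term(z2)
    # Covariance: push off-diagonal covariance entries toward zero
    # to decorrelate embedding dimensions.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        c = (zc.T @ zc) / (n - 1)
        off = c - np.diag(np.diag(c))
        return np.sum(off ** 2) / d
    cov = cov_term(z1) + cov_term(z2)
    return sim_w * inv + var_w * var + cov_w * cov

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 8))
loss_same = vicreg_loss(z, z)  # invariance term vanishes for identical views
```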
Unsupervised Feature Learning via Non-Parametric Instance Discrimination
Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so.
Interpolation Consistency Training for Semi-Supervised Learning
We introduce Interpolation Consistency Training (ICT), a simple and computationally efficient algorithm for training deep neural networks in the semi-supervised learning paradigm.
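ICT's core move is to encourage the model's prediction at an interpolation of two unlabelled points to match the interpolation of its predictions at those points. A rough NumPy sketch under stated assumptions (the linear "model" `predict`, its weights, and the batch sizes are stand-ins for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, x2, alpha=1.0):
    """Mix two batches with a Beta-sampled coefficient, mixup-style."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stand-in "model": a fixed random linear classifier (hypothetical).
W = rng.normal(size=(4, 3))
predict = lambda x: softmax(x @ W)

# Two unlabelled mini-batches.
u1, u2 = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))

# Target: interpolation of predictions; input: interpolation of examples.
mixed, lam = mixup(u1, u2)
target = lam * predict(u1) + (1 - lam) * predict(u2)
consistency_loss = np.mean((predict(mixed) - target) ** 2)
```

In the actual algorithm the targets come from a mean-teacher (EMA) copy of the model, omitted here for brevity.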
Data-Efficient Image Recognition with Contrastive Predictive Coding
Human observers can learn to recognize new categories of images from a handful of examples, yet doing so with artificial ones remains an open challenge.
Large Scale Adversarial Representation Learning
We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation.
Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples
This paper proposes a novel method of learning by predicting view assignments with support samples (PAWS).
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations
On semi-supervised learning benchmarks we significantly improve performance when only 1% of ImageNet labels are available, from 53.8% to 56.5%.
FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
Semi-supervised Learning (SSL) has witnessed great success owing to the impressive performances brought by various methods based on pseudo labeling and consistency regularization.
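Pseudo labeling with a confidence threshold, the mechanism FreeMatch builds on, can be sketched as below. This is a simplified illustration, not the paper's method; FreeMatch additionally adapts the threshold per class, which is omitted here:

```python
import numpy as np

def pseudo_label_mask(probs, threshold):
    """Keep only predictions whose max confidence clears the threshold;
    the argmax of each kept row becomes its pseudo label."""
    conf = probs.max(axis=1)
    return conf >= threshold, probs.argmax(axis=1)

def update_threshold(tau, probs, momentum=0.999):
    """Self-adaptive threshold (simplified): EMA of the batch's mean
    max-confidence, so the threshold tracks the model's own confidence."""
    return momentum * tau + (1 - momentum) * probs.max(axis=1).mean()

probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
mask, labels = pseudo_label_mask(probs, threshold=0.8)
# mask -> [True, False]; labels -> [0, 0]; only the first example is kept
tau = update_threshold(0.5, probs)  # drifts toward the mean confidence 0.65
```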
USB: A Unified Semi-supervised Learning Benchmark for Classification
We further provide the pre-trained versions of the state-of-the-art neural models for CV tasks to make the cost affordable for further tuning.
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
Distribution alignment encourages the marginal distribution of predictions on unlabeled data to be close to the marginal distribution of ground-truth labels.
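Distribution alignment as described above can be sketched in a few lines: rescale each prediction by the ratio of the label marginal to a running estimate of the model's prediction marginal, then renormalize. An illustrative sketch, with made-up marginals, not the paper's code:

```python
import numpy as np

def distribution_alignment(probs, running_pred_dist, label_dist):
    """Scale predictions by (ground-truth label marginal / running model
    prediction marginal), then renormalize each row to sum to 1."""
    aligned = probs * (label_dist / running_pred_dist)
    return aligned / aligned.sum(axis=1, keepdims=True)

probs = np.array([[0.7, 0.3]])          # model's prediction on one unlabeled example
running = np.array([0.5, 0.5])          # running average of model predictions
labels_marginal = np.array([0.2, 0.8])  # marginal of ground-truth labels
aligned = distribution_alignment(probs, running, labels_marginal)
# mass shifts toward class 1, which the model under-predicts relative to the labels
```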