Semi-Supervised Image Classification
124 papers with code • 58 benchmarks • 13 datasets
Semi-supervised image classification leverages unlabelled data in addition to labelled data to improve classification performance.
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards:
- An overview of proxy-label approaches for semi-supervised learning - Sebastian Ruder
- Semi-Supervised Learning in Computer Vision - Amit Chaudhary
(Image credit: Self-Supervised Semi-Supervised Learning)
Libraries
Use these libraries to find Semi-Supervised Image Classification models and implementations.

Latest papers with no code
Improving Face Recognition by Clustering Unlabeled Faces in the Wild
While deep face recognition has benefited significantly from large-scale labeled data, current research is focused on leveraging unlabeled data to further boost performance while reducing the cost of human annotation.
Consistency Regularization with Generative Adversarial Networks for Semi-Supervised Learning
Our experiments show that this new composite-consistency-regularized semi-GAN significantly improves performance and achieves a new state of the art among GAN-based SSL approaches.
Adversarial Transformations for Semi-Supervised Learning
We propose a Regularization framework based on Adversarial Transformations (RAT) for semi-supervised learning.
Pseudo-Labeling Curriculum for Unsupervised Domain Adaptation
To learn target discriminative representations, using pseudo-labels is a simple yet effective approach for unsupervised domain adaptation.
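The generic pseudo-labeling loop behind approaches like this can be sketched in a few lines. Below is a minimal illustration (not the paper's curriculum scheme): a toy logistic-regression "model" stands in for a deep network, and all names (`train_logreg`, the 0.95 confidence threshold, the blob data) are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=200):
    """Fit a tiny logistic-regression classifier by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict_proba(X, w, b):
    return sigmoid(X @ w + b)

# Toy data: two well-separated Gaussian blobs, only 4 labeled points.
X_all = np.vstack([rng.normal(2.0, 0.5, (50, 2)),
                   rng.normal(-2.0, 0.5, (50, 2))])
y_all = np.concatenate([np.ones(50), np.zeros(50)])
labeled_idx = np.array([0, 1, 50, 51])
unlabeled_idx = np.setdiff1d(np.arange(100), labeled_idx)

# Step 1: train on the labeled set only.
w, b = train_logreg(X_all[labeled_idx], y_all[labeled_idx])

# Step 2: pseudo-label unlabeled points the model is confident about.
p_u = predict_proba(X_all[unlabeled_idx], w, b)
confident = (p_u > 0.95) | (p_u < 0.05)
pseudo_y = (p_u[confident] > 0.5).astype(float)

# Step 3: retrain on labeled + confidently pseudo-labeled data.
X_aug = np.vstack([X_all[labeled_idx], X_all[unlabeled_idx][confident]])
y_aug = np.concatenate([y_all[labeled_idx], pseudo_y])
w2, b2 = train_logreg(X_aug, y_aug)

acc = np.mean((predict_proba(X_all, w2, b2) > 0.5) == y_all)
```

A curriculum variant, as in the paper above, would tighten or relax the confidence threshold over training rounds rather than fixing it.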
Energy Models for Better Pseudo-Labels: Improving Semi-Supervised Classification with the 1-Laplacian Graph Energy
Semi-supervised classification attracts considerable interest, as obtaining labels in real-world scenarios is expensive, time-consuming, and may require expert knowledge.
Manifold Graph with Learned Prototypes for Semi-Supervised Image Classification
We then show that when combined with these regularizers, the proposed method facilitates the propagation of information from generated prototypes to image data to further improve results.
Semi-supervised Sequence-to-sequence ASR using Unpaired Speech and Text
Such techniques derive training procedures and losses able to leverage unpaired speech and/or text data by combining ASR with Text-to-Speech (TTS) models.
Unsupervised Learning using Pretrained CNN and Associative Memory Bank
In this paper, we present a new architecture and approach for unsupervised object recognition that addresses the fine-tuning problem associated with pretrained CNN-based supervised deep learning approaches, while still allowing automated feature extraction.
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning
Effective convolutional neural networks are trained on large sets of labeled data.
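The core idea of this line of work — penalizing disagreement between a model's predictions on two stochastically transformed copies of the same unlabeled input — can be sketched as a single loss term. The snippet below is a simplified illustration: a linear map stands in for the CNN, Gaussian input noise stands in for the stochastic transformations, and the weight `lam` and noise scale are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def model(X, W):
    """Stand-in 'network': linear map + softmax. A real model would be a CNN."""
    return softmax(X @ W)

def perturb(X, rng, noise=0.1):
    """Stochastic transformation: additive Gaussian noise on the input."""
    return X + rng.normal(scale=noise, size=X.shape)

def consistency_loss(X, W, rng):
    """Mean squared disagreement between two randomly perturbed forward passes."""
    p1 = model(perturb(X, rng), W)
    p2 = model(perturb(X, rng), W)
    return np.mean((p1 - p2) ** 2)

def supervised_loss(X, y, W):
    """Standard cross-entropy on the labeled batch."""
    p = model(X, W)
    return -np.mean(np.log(p[np.arange(len(y)), y] + 1e-12))

# Toy batches: 3 classes, 5 features; unlabeled batch needs no targets.
W = rng.normal(size=(5, 3))
X_lab, y_lab = rng.normal(size=(8, 5)), rng.integers(0, 3, size=8)
X_unlab = rng.normal(size=(32, 5))

lam = 1.0  # illustrative weight of the unsupervised consistency term
total = supervised_loss(X_lab, y_lab, W) + lam * consistency_loss(X_unlab, W, rng)
```

Training would minimize `total` over `W`; the consistency term requires no labels, which is how such methods exploit large unlabeled sets.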
Unsupervised High-level Feature Learning by Ensemble Projection for Semi-supervised Image Classification and Image Clustering
Hence, in the spirit of ensemble learning, we create a set of diverse training sets, leading to diverse classifiers.