Semi-Supervised Image Classification

121 papers with code • 46 benchmarks • 13 datasets

Semi-supervised image classification leverages unlabelled data alongside labelled data to improve classification performance.
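A common way to use the unlabelled data is pseudo-labeling: fit a model on the labelled set, then adopt its confident predictions on unlabelled images as extra training targets. A minimal sketch with a nearest-centroid classifier (the classifier, function names, and the 0.9 confidence threshold are all illustrative choices, not a specific method from the papers below):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit a nearest-centroid classifier: one mean vector per class."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_with_confidence(X, classes, centroids):
    """Predict the closest centroid; confidence = softmax over negative distances."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    scores = np.exp(-d)
    probs = scores / scores.sum(axis=1, keepdims=True)
    idx = probs.argmax(axis=1)
    return classes[idx], probs.max(axis=1)

def pseudo_label_round(X_lab, y_lab, X_unlab, threshold=0.9):
    """One round of pseudo-labeling: adopt confident predictions as labels."""
    classes, centroids = nearest_centroid_fit(X_lab, y_lab)
    y_hat, conf = predict_with_confidence(X_unlab, classes, centroids)
    keep = conf >= threshold
    X_new = np.concatenate([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, y_hat[keep]])
    return X_new, y_new

# Toy data: two well-separated Gaussian clusters, few labels, many unlabelled points.
rng = np.random.default_rng(0)
X_lab = np.concatenate([rng.normal(-2, 0.5, (5, 2)), rng.normal(2, 0.5, (5, 2))])
y_lab = np.array([0] * 5 + [1] * 5)
X_unlab = np.concatenate([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
X_aug, y_aug = pseudo_label_round(X_lab, y_lab, X_unlab)
print(len(y_aug))  # labelled set grows with confident pseudo-labels
```

In practice this loop is repeated, and modern methods refine it with data augmentation, confidence calibration, and consistency constraints rather than raw distance-based confidence.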


(Image credit: Self-Supervised Semi-Supervised Learning)


Latest papers with no code

Color-$S^{4}L$: Self-supervised Semi-supervised Learning with Image Colorization

no code yet • 8 Jan 2024

This work addresses semi-supervised image classification by integrating several effective self-supervised pretext tasks.

How To Overcome Confirmation Bias in Semi-Supervised Image Classification By Active Learning

no code yet • 16 Aug 2023

We conduct experiments with SSL and AL on simulated data challenges and find that random sampling does not mitigate confirmation bias and, in some cases, leads to worse performance than supervised learning.

Graph Convolutional Networks based on Manifold Learning for Semi-Supervised Image Classification

no code yet • 24 Apr 2023

In spite of many advances, most approaches require a large amount of labeled data, which is often unavailable due to the cost and difficulty of manual labeling.

Semi-MAE: Masked Autoencoders for Semi-supervised Vision Transformers

no code yet • 4 Jan 2023

To alleviate this issue, we take inspiration from the masked autoencoder (MAE), a data-efficient self-supervised learner, and propose Semi-MAE, a pure ViT-based SSL framework with a parallel MAE branch that assists visual representation learning and makes the pseudo labels more accurate.

Self Meta Pseudo Labels: Meta Pseudo Labels Without The Teacher

no code yet • 27 Dec 2022

We present Self Meta Pseudo Labels, a novel semi-supervised learning method similar to Meta Pseudo Labels but without the teacher model.

Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework

no code yet • 3 Dec 2022

In this paper, we first show that federated ADMM is essentially a client-variance-reduced algorithm.

Contrastive Regularization for Semi-Supervised Learning

no code yet • 17 Jan 2022

Consistency regularization on label predictions has become a fundamental technique in semi-supervised learning, but it still requires a large number of training iterations to reach high performance.
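Consistency regularization penalizes the model when two augmented views of the same unlabelled image produce different predictions. A minimal sketch of such a loss (the MSE-between-distributions form and all names are illustrative; real methods typically stop gradients through the target view and use stronger augmentations):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_weak, logits_strong):
    """Mean squared error between predicted class distributions for two
    augmented views of the same unlabelled images."""
    p_weak = softmax(logits_weak)      # target view (no gradient in practice)
    p_strong = softmax(logits_strong)  # prediction on the other view
    return np.mean((p_weak - p_strong) ** 2)

rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10))                        # outputs for one view
logits_noisy = logits + 0.1 * rng.normal(size=(4, 10))   # a perturbed view
loss = consistency_loss(logits, logits_noisy)
print(loss)  # small when the two views agree
```

The loss is zero when the two views predict identical distributions and grows as they diverge, which is what drives the model toward augmentation-invariant predictions.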

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

no code yet • 13 Jan 2022

Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform the supervised baseline in a like-for-like comparison over a range of ResNet architectures.

Towards Discovering the Effectiveness of Moderately Confident Samples for Semi-Supervised Learning

no code yet • CVPR 2022

To address these problems, we propose a novel Taylor-expansion-inspired filtration (TEIF) framework, which admits moderately confident samples whose features or gradients are similar to those averaged over the labeled and highly confident unlabeled data.
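The filtering idea described above can be sketched roughly as follows (this is an illustration of the stated idea, not the paper's actual TEIF; the function name, cosine-similarity criterion, and all thresholds are assumptions):

```python
import numpy as np

def moderate_confidence_filter(feats_unlab, conf, feats_trusted,
                               low=0.5, high=0.95, sim_thresh=0.9):
    """Keep moderately confident unlabelled samples whose features lie close
    (in cosine similarity) to the mean feature of trusted data, i.e. labeled
    plus highly confident unlabeled samples."""
    anchor = feats_trusted.mean(axis=0)
    anchor = anchor / np.linalg.norm(anchor)
    f = feats_unlab / np.linalg.norm(feats_unlab, axis=1, keepdims=True)
    sim = f @ anchor                       # cosine similarity to the anchor
    moderate = (conf >= low) & (conf < high)
    return moderate & (sim >= sim_thresh)

# Toy usage: one cluster near the trusted data, one far away.
rng = np.random.default_rng(1)
feats_trusted = rng.normal(0, 0.1, (20, 8)) + 1.0
feats_unlab = np.concatenate([
    rng.normal(0, 0.1, (10, 8)) + 1.0,   # aligned with the trusted cluster
    -np.ones((10, 8)),                   # opposite direction
])
conf = np.full(20, 0.7)                  # all moderately confident
mask = moderate_confidence_filter(feats_unlab, conf, feats_trusted)
print(mask.sum())  # only the aligned samples pass the filter
```

The point is that a confidence score alone is not enough: moderately confident samples are admitted only when their representation agrees with what the trusted data looks like.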

DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples

no code yet • NeurIPS 2021

Extensive experiments on four standard SSL benchmarks show that DP-SSL can provide reliable labels for unlabeled data and achieve better classification performance on test sets than existing SSL methods, especially when only a small number of labeled samples are available.