Semi-supervised image classification leverages unlabelled data as well as labelled data to increase classification performance.
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboard.
(Image credit: Self-Supervised Semi-Supervised Learning)
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
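As a rough sketch of the third step, the teacher's soft predictions on unlabeled images can supervise a student via a soft cross-entropy (a minimal PyTorch sketch; the function name and temperature parameter here are illustrative assumptions, not the paper's exact recipe):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # Teacher soft labels on unlabeled images; no gradient flows into the teacher.
    teacher_probs = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Soft cross-entropy between teacher and student class distributions.
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()
```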
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
#2 best model for Self-Supervised Image Classification on ImageNet
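A minimal sketch of that objective, assuming the standard setup in which the target network is an exponential moving average of the online network (names and the momentum value are illustrative):

```python
import torch
import torch.nn.functional as F

def byol_loss(online_prediction, target_projection):
    # Negative cosine similarity between the online network's prediction and
    # the target network's projection of the other view (stop-gradient on the target).
    p = F.normalize(online_prediction, dim=-1)
    z = F.normalize(target_projection.detach(), dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(target_net, online_net, momentum=0.99):
    # Target weights track an exponential moving average of the online weights.
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(momentum).add_(o, alpha=1 - momentum)
```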
First, a self-supervised task from representation learning is employed to obtain semantically meaningful features.
Using it to provide perturbations for semi-supervised consistency regularization, we achieve a state-of-the-art result on ImageNet with 10% labeled data, with a top-5 error of 8.76% and top-1 error of 26.06%.
#6 best model for Semi-Supervised Image Classification on ImageNet - 10% labeled data
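One common form of the consistency term is a KL divergence between the model's predictions on clean and perturbed views of the same unlabeled image (a hedged sketch; `perturb` stands in for whatever perturbation source is used and is not from the paper):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_unlabeled, perturb):
    # Fixed target: prediction on the clean image, with gradients blocked.
    with torch.no_grad():
        clean_probs = F.softmax(model(x_unlabeled), dim=-1)
    # Prediction on the perturbed view is pushed toward the clean prediction.
    noisy_log_probs = F.log_softmax(model(perturb(x_unlabeled)), dim=-1)
    return F.kl_div(noisy_log_probs, clean_probs, reduction="batchmean")
```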
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
#4 best model for Semi-Supervised Image Classification on ImageNet - 10% labeled data
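At the core of SimCLR is the NT-Xent contrastive loss: the two augmented views of each image are positives, and all other views in the batch serve as negatives. A compact PyTorch sketch (the temperature value is illustrative):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (N, dim) projection-head outputs for two views of the same N images.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)   # (2N, dim)
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    n = z1.size(0)
    # Exclude self-similarity from the softmax.
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float("-inf"))
    # Row i's positive is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```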
In this paper, we propose the SubSpace Capsule Network (SCN) that exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity through a group of capsule subspaces instead of simply grouping neurons to create capsules.
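As a loose illustration of the subspace idea (not the paper's exact layer), each capsule can own a learned basis, orthonormalized on the fly, with the length of the input's projection onto that subspace serving as the capsule's activation:

```python
import torch
import torch.nn as nn

class SubspaceCapsule(nn.Module):
    # Hypothetical sketch: one learned subspace per capsule.
    def __init__(self, in_dim, num_capsules, subspace_dim):
        super().__init__()
        self.bases = nn.Parameter(0.01 * torch.randn(num_capsules, in_dim, subspace_dim))

    def forward(self, x):                           # x: (batch, in_dim)
        q, _ = torch.linalg.qr(self.bases)          # orthonormal basis per capsule
        coords = torch.einsum("bd,kds->bks", x, q)  # coordinates in each subspace
        return coords.norm(dim=-1)                  # projection length = activation
```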
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance.
The pairing stage computes the error per sample, sorts the samples, and pairs them hardest-with-easiest; the mixing stage then merges each pair using mixup, $\lambda x_1 + (1-\lambda)x_2$.
#16 best model for Image Classification on CIFAR-10
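A hedged sketch of the pairing-and-mixing described above (`alpha` parameterizes the Beta distribution that $\lambda$ is drawn from, an assumption rather than a value from the paper):

```python
import torch

def pair_and_mix(xs, per_sample_losses, alpha=0.75):
    order = torch.argsort(per_sample_losses)          # easiest ... hardest
    half = len(order) // 2
    easy, hard = order[:half], order.flip(0)[:half]   # hardest paired with easiest
    lam = torch.distributions.Beta(alpha, alpha).sample()
    # mixup: lambda * x1 + (1 - lambda) * x2
    return lam * xs[hard] + (1 - lam) * xs[easy]
```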
Normalizing flows transform a latent distribution through an invertible neural network for a flexible and pleasingly simple approach to generative modelling, while preserving an exact likelihood.
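The exact likelihood comes from the change-of-variables formula, $\log p(x) = \log p_z(f(x)) + \log|\det \partial f/\partial x|$; a minimal sketch, assuming the flow returns both its output and the log-determinant of its Jacobian (a common but not universal interface):

```python
import torch

def flow_log_likelihood(x, flow, base_dist):
    # z = f(x) plus the log-determinant of the Jacobian of f at x.
    z, log_det_jacobian = flow(x)
    # Change of variables: density in data space from density in latent space.
    return base_dist.log_prob(z) + log_det_jacobian
```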
The performance of existing point cloud-based 3D object detection methods heavily relies on large-scale high-quality 3D annotations.