We aim to bridge the gap between typical human and machine-learning environments by extending the standard framework of few-shot learning to an online, continual setting.
In this paper, we argue that the WSOL task is ill-posed with only image-level labels, and propose a new evaluation protocol in which full supervision is limited to a small held-out set that does not overlap with the test set.
Building on these insights and on advances in self-supervised learning, we propose a transfer learning approach which constructs a metric embedding that clusters unlabeled prototypical samples and their augmentations closely together.
Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods, with further gains achieved by our second stage distillation process.
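The approach above hinges on a metric embedding that pulls each sample and its augmentations close together. As one illustration of that idea (not the paper's exact objective), here is a minimal NumPy sketch of a generic NT-Xent-style contrastive loss, assuming rows `i` and `i+N` of the embedding matrix are two augmentations of the same sample:

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """Contrastive loss over 2N embeddings; rows i and i+N are augmentation pairs."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise embeddings
    sim = z @ z.T / temperature                       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    n = len(z) // 2
    # index of each row's positive (its augmentation partner)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))
```

Minimising this loss clusters augmentations of the same sample while pushing other samples apart; the papers build their transfer-learning pipelines on embeddings trained with objectives in this family.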
#3 best model for Few-Shot Image Classification on CIFAR-FS 5-way (5-shot)
Few-shot classification is a challenging problem due to the uncertainty that arises from having only a few labelled samples.
In this paper, we provide a framework for few-shot learning by introducing dynamic classifiers that are constructed from few samples.
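"Dynamic classifiers constructed from few samples" can be illustrated with the simplest member of that family: a nearest-prototype classifier built on the fly from the support set. This is a hedged sketch of the general pattern, not the specific construction proposed in the paper:

```python
import numpy as np

def build_prototype_classifier(support_x, support_y):
    """Construct a classifier from a few labelled support embeddings."""
    classes = np.unique(support_y)
    # one prototype per class: the mean of that class's support embeddings
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])

    def classify(query_x):
        # assign each query to the class with the nearest prototype
        d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
        return classes[d.argmin(axis=1)]

    return classify
```

The key property shared with the paper's framework is that no classifier weights are learned at test time: the classifier is assembled directly from the few available samples.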
#3 best model for Few-Shot Image Classification on CIFAR-FS 5-way (1-shot)
Generating the classification weights has been applied in many meta-learning methods for few-shot image classification due to its simplicity and effectiveness.
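A common instance of weight generation, sketched here under the assumption of a cosine classifier with class means as generated weights (a simplification; individual methods use learned generators):

```python
import numpy as np

def generate_weights(support_x, support_y):
    """Generate one classifier weight vector per class from support embeddings."""
    classes = np.unique(support_y)
    # weight vector = L2-normalised mean support embedding of the class
    W = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    return classes, W

def predict(query_x, classes, W, scale=10.0):
    """Classify queries with the generated weights via cosine-similarity logits."""
    q = query_x / np.linalg.norm(query_x, axis=1, keepdims=True)
    logits = scale * q @ W.T
    return classes[logits.argmax(axis=1)]
```

The generated weights plug directly into a standard linear classification head, which is what makes the approach simple to combine with pretrained backbones.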