Few-Shot Learning

1041 papers with code • 22 benchmarks • 41 datasets

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks given just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a common representation for the various tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
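To make the shared-representation idea concrete, here is a minimal sketch of a single N-way K-shot episode: a frozen embedding function produces features, and a task-specific nearest-prototype classifier is built from the support set. This is one common instantiation (a prototypical-network-style head), not the method of any particular paper below; `embed` is a stand-in assumption for a pretrained backbone.

```python
import numpy as np

def embed(x):
    # Stand-in for a pretrained backbone; here just a fixed random projection.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((x.shape[-1], 64))
    return x @ W

def prototype_classifier(support_x, support_y, query_x, n_way):
    """Task-specific classifier on top of a shared representation:
    one prototype (mean embedding) per class, nearest-prototype prediction."""
    s, q = embed(support_x), embed(query_x)
    protos = np.stack([s[support_y == c].mean(axis=0) for c in range(n_way)])
    # Euclidean distance from each query embedding to each class prototype.
    dists = np.linalg.norm(q[:, None, :] - protos[None, :, :], axis=-1)
    return dists.argmin(axis=1)

# A toy 5-way 1-shot episode with 32-dim inputs.
rng = np.random.default_rng(1)
support_x = rng.standard_normal((5, 32))
support_y = np.arange(5)
query_x = support_x + 0.01 * rng.standard_normal((5, 32))
print(prototype_classifier(support_x, support_y, query_x, n_way=5))  # [0 1 2 3 4]
```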

Most implemented papers

Leveraging the Feature Distribution in Transfer-based Few-Shot Learning

yhu01/PT-MAP 6 Jun 2020

Few-shot classification is a challenging problem due to the uncertainty caused by using few labelled samples.

Free Lunch for Few-shot Learning: Distribution Calibration

ShuoYang-1998/ICLR2021-Oral_Distribution_Calibration ICLR 2021

In this paper, we calibrate the distribution of these few-sample classes by transferring statistics from classes with sufficient examples; an adequate number of examples can then be sampled from the calibrated distribution to expand the inputs to the classifier.
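A rough sketch of that calibration step, under simplifying assumptions (Gaussian class-conditional features, statistics transferred from the k nearest base classes; variable names are mine, not the paper's):

```python
import numpy as np

def calibrate_and_sample(novel_feats, base_means, base_covs, k=2,
                         alpha=0.2, n_samples=100, rng=None):
    """Transfer statistics from the k base classes whose means lie closest
    to the novel class's mean, then sample extra features from the
    calibrated Gaussian to expand the classifier's training set."""
    rng = rng or np.random.default_rng(0)
    mu_novel = novel_feats.mean(axis=0)
    # Pick the k base classes nearest in feature space.
    nearest = np.argsort(np.linalg.norm(base_means - mu_novel, axis=1))[:k]
    # Calibrated statistics: blend the neighbours' stats with the novel
    # mean; alpha adds extra spread to the covariance.
    mu = (base_means[nearest].sum(axis=0) + mu_novel) / (k + 1)
    cov = base_covs[nearest].mean(axis=0) + alpha * np.eye(len(mu))
    return rng.multivariate_normal(mu, cov, size=n_samples)

# Toy data: 3 base classes with known stats, one 1-shot novel class.
rng = np.random.default_rng(42)
base_means = rng.standard_normal((3, 8))
base_covs = np.stack([np.eye(8)] * 3)
novel_feats = rng.standard_normal((1, 8))
print(calibrate_and_sample(novel_feats, base_means, base_covs).shape)  # (100, 8)
```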

Meta-learning with differentiable closed-form solvers

learnables/learn2learn ICLR 2019

The main idea is to teach a deep network to use standard machine learning tools, such as ridge regression, as part of its own internal model, enabling it to quickly adapt to novel data.
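The closed-form ridge solution is straightforward to make differentiable end-to-end; a minimal PyTorch sketch of the core idea (not the paper's R2-D2 code):

```python
import torch

def ridge_head(support_x, support_y, query_x, lam=1.0):
    """Solve W = (XᵀX + λI)⁻¹ XᵀY in closed form on the support set, then
    apply it to the queries. Every op is differentiable, so the backbone
    that produced the features can be meta-trained through the solver."""
    n, d = support_x.shape
    Y = torch.nn.functional.one_hot(support_y).float()
    A = support_x.T @ support_x + lam * torch.eye(d)
    W = torch.linalg.solve(A, support_x.T @ Y)
    return query_x @ W  # per-class scores for each query

# Toy 5-way 5-shot episode with 16-dim features.
x = torch.randn(25, 16, requires_grad=True)
y = torch.arange(5).repeat_interleave(5)
scores = ridge_head(x, y, torch.randn(10, 16))
scores.sum().backward()   # gradients flow back into the features
print(x.grad.shape)       # torch.Size([25, 16])
```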

Meta-Learning with Latent Embedding Optimization

deepmind/leo ICLR 2019

We show that it is possible to bypass these limitations by learning a data-dependent latent generative representation of model parameters, and performing gradient-based meta-learning in this low-dimensional latent space.
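The core move, taking gradient steps on a low-dimensional latent code that decodes to classifier weights rather than on the weights themselves, can be sketched in a few lines of PyTorch. This is a heavy simplification of LEO (no relation-network encoder, deterministic decoder; all module names are mine):

```python
import torch, torch.nn as nn, torch.nn.functional as F

feat_dim, n_way, latent_dim = 64, 5, 16
encoder = nn.Linear(feat_dim, latent_dim)          # features -> latent code
decoder = nn.Linear(latent_dim, feat_dim * n_way)  # latent code -> classifier weights

def inner_adapt(support_x, support_y, steps=3, lr=1.0):
    # Initialise the latent code from the support set, then take
    # gradient steps in latent space instead of weight space.
    z = encoder(support_x).mean(dim=0)
    for _ in range(steps):
        W = decoder(z).view(n_way, feat_dim)
        loss = F.cross_entropy(support_x @ W.T, support_y)
        (grad,) = torch.autograd.grad(loss, z, create_graph=True)
        z = z - lr * grad  # create_graph keeps this outer-loop differentiable
    return decoder(z).view(n_way, feat_dim)

# One toy episode: the adapted weights classify the queries.
sx, sy = torch.randn(25, feat_dim), torch.arange(n_way).repeat_interleave(5)
W = inner_adapt(sx, sy)
print((torch.randn(10, feat_dim) @ W.T).shape)  # torch.Size([10, 5])
```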

Few-Shot Learning via Embedding Adaptation with Set-to-Set Functions

Sha-Lab/FEAT CVPR 2020

Many few-shot learning methods address this challenge by learning an instance embedding function from seen classes and applying it to instances from unseen classes with limited labels.
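FEAT instead adapts the embeddings jointly per task. A minimal sketch of a set-to-set function as self-attention over class prototypes, with nn.MultiheadAttention standing in for the paper's transformer set function:

```python
import torch, torch.nn as nn

feat_dim, n_way = 64, 5
attn = nn.MultiheadAttention(embed_dim=feat_dim, num_heads=4, batch_first=True)

def adapt_prototypes(protos):
    """Set-to-set adaptation: each prototype attends to all the others,
    so the embeddings become task-specific rather than fixed per instance."""
    p = protos.unsqueeze(0)             # (1, n_way, feat_dim)
    adapted, _ = attn(p, p, p)          # self-attention over the set
    return (p + adapted).squeeze(0)     # residual connection

protos = torch.randn(n_way, feat_dim)   # mean support embedding per class
adapted = adapt_prototypes(protos)
query = torch.randn(10, feat_dim)
# Classify queries against the adapted prototypes (negative distances).
logits = -torch.cdist(query, adapted)
print(logits.argmax(dim=1))
```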

DeepEMD: Differentiable Earth Mover's Distance for Few-Shot Learning

icoz69/DeepEMD 15 Mar 2020

We employ the Earth Mover's Distance (EMD) as a metric to compute a structural distance between dense image representations to determine image relevance.
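Conceptually, each image becomes a weighted set of local features, and the structural distance is the optimal-transport cost between the two sets. A sketch using the POT library's exact EMD solver (POT is my assumption here for the solver; DeepEMD itself solves the transport problem inside the network to keep it differentiable):

```python
import numpy as np
import ot  # POT: pip install pot

def emd_distance(feats_a, feats_b):
    """Structural distance between two dense feature maps.
    Each row is one local feature (e.g. one spatial position)."""
    # Ground cost: 1 - cosine similarity between local features.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    M = 1.0 - a @ b.T
    # Uniform weights over locations (DeepEMD instead learns weights
    # with its cross-reference mechanism).
    wa = np.full(len(a), 1.0 / len(a))
    wb = np.full(len(b), 1.0 / len(b))
    return ot.emd2(wa, wb, M)  # exact optimal-transport cost

rng = np.random.default_rng(0)
x, y = rng.standard_normal((25, 64)), rng.standard_normal((25, 64))
print(emd_distance(x, y))   # dissimilar images: larger cost
print(emd_distance(x, x))   # identical images: ~0
```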

Calibrate Before Use: Improving Few-Shot Performance of Language Models

tonyzhaozh/few-shot-learning 19 Feb 2021

We show that this type of few-shot learning can be unstable: the choice of prompt format, training examples, and even the order of the training examples can cause accuracy to vary from near chance to near state-of-the-art.
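The proposed fix is contextual calibration: query the model with a content-free input such as "N/A", and use the resulting label probabilities to correct the bias. A minimal sketch of the correction step (the probabilities below are toy numbers, not real model outputs):

```python
import numpy as np

def calibrate(p, p_content_free):
    """Contextual calibration: rescale label probabilities by the model's
    bias on a content-free prompt (W = diag(1 / p_cf)), then renormalise."""
    q = p / p_content_free
    return q / q.sum()

# Toy example: the prompt format biases the model toward the first label.
p_cf = np.array([0.7, 0.3])   # P(label | "N/A") -- the measured bias
p = np.array([0.6, 0.4])      # P(label | actual input)
print(calibrate(p, p_cf))     # ~[0.39, 0.61]: the format bias is removed
```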

PaLM: Scaling Language Modeling with Pathways

lucidrains/CoCa-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated Transformer language model, which we call the Pathways Language Model (PaLM).

Flamingo: a Visual Language Model for Few-Shot Learning

mlfoundations/open_flamingo DeepMind 2022

Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research.

Dynamic Few-Shot Visual Learning without Forgetting

gidariss/FewShotWithoutForgetting CVPR 2018

In this context, the goal of our work is to devise a few-shot visual learning system that, at test time, can efficiently learn novel categories from only a few training examples while not forgetting the initial categories on which it was trained (here called base categories).
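One key component of the paper is a cosine-similarity based classifier, which keeps base and novel class weights on a comparable scale so novel categories can be added without disturbing the base ones. A rough sketch (novel weights here are simply support-set means; the paper additionally learns an attention-based weight generator):

```python
import torch, torch.nn.functional as F

def cosine_classify(query, base_weights, novel_weights, tau=10.0):
    """Cosine classifier over base + novel categories: L2-normalising both
    features and weights keeps all classes comparable, so novel weights
    can be injected at test time without retraining the base classifier."""
    W = torch.cat([base_weights, novel_weights], dim=0)
    return tau * F.normalize(query, dim=1) @ F.normalize(W, dim=1).T

feat_dim = 64
base_W = torch.randn(10, feat_dim)        # learned base-class weights
support = torch.randn(5, 3, feat_dim)     # 5 novel classes, 3 shots each
novel_W = support.mean(dim=1)             # novel weights from support means
print(cosine_classify(torch.randn(4, feat_dim), base_W, novel_W).shape)
# torch.Size([4, 15]): scores over 10 base + 5 novel categories
```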