Few-Shot Learning
1013 papers with code • 22 benchmarks • 41 datasets
Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation across tasks and train task-specific classifiers on top of this representation.
Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
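The shared-representation recipe described above can be sketched in a few lines. This is a hedged toy example, not any particular published method: the `tanh` "encoder" stands in for a meta-learned feature extractor, the Gaussian clusters stand in for real image features, and the task-specific classifier is a simple nearest-prototype rule fit on the support set.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Stand-in for a shared, meta-learned feature extractor.
    return np.tanh(x)

def fit_task_classifier(support_x, support_y, n_classes):
    # Task-specific classifier on top of the shared representation:
    # here, per-class mean features (a prototype classifier).
    feats = encode(support_x)
    return np.stack([feats[support_y == c].mean(axis=0) for c in range(n_classes)])

def predict(prototypes, query_x):
    # Assign each query to its nearest class prototype.
    feats = encode(query_x)
    dists = ((feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# A toy 2-way 5-shot episode with well-separated classes.
support_x = np.concatenate([rng.normal(-2, 0.3, (5, 4)), rng.normal(2, 0.3, (5, 4))])
support_y = np.array([0] * 5 + [1] * 5)
query_x = np.concatenate([rng.normal(-2, 0.3, (3, 4)), rng.normal(2, 0.3, (3, 4))])

protos = fit_task_classifier(support_x, support_y, n_classes=2)
print(predict(protos, query_x))
```

In meta-training, the encoder itself would be updated across many such episodes so that this simple per-task classifier works well on new tasks.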
Libraries
Use these libraries to find Few-Shot Learning models and implementations
Most implemented papers
How to train your MAML
The field of few-shot learning has recently seen substantial advancements.
Making Pre-trained Language Models Better Few-shot Learners
We present LM-BFF--better few-shot fine-tuning of language models--a suite of simple and complementary techniques for fine-tuning language models on a small number of annotated examples.
Big Transfer (BiT): General Visual Representation Learning
We conduct detailed analysis of the main components that lead to high transfer performance.
Data Augmentation Generative Adversarial Networks
The model, based on image-conditional Generative Adversarial Networks, takes data from a source domain and learns to take any data item and generalise from it to generate other within-class data items.
Meta-Learning with Differentiable Convex Optimization
We propose to use these predictors as base learners to learn representations for few-shot learning and show they offer better tradeoffs between feature size and performance across a range of few-shot recognition benchmarks.
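A minimal sketch of a convex base learner on top of learned features: the paper uses differentiable convex solvers such as SVMs, but ridge regression to one-hot targets (used here instead, as an assumption for illustration) has a closed-form argmin, which makes the differentiate-through-the-solver idea easy to see.

```python
import numpy as np

def ridge_base_learner(support_feats, support_y, n_classes, lam=1.0):
    # Closed-form ridge regression to one-hot targets, as a convex
    # base learner on top of learned features. Because the argmin has
    # a closed form, it can be differentiated through in meta-training.
    X, Y = support_feats, np.eye(n_classes)[support_y]
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy 2-way episode on 2-D "features".
support = np.array([[2.0, 0.1], [1.8, -0.1], [0.1, 2.0], [-0.1, 1.8]])
labels = np.array([0, 0, 1, 1])
W = ridge_base_learner(support, labels, n_classes=2)

query = np.array([[1.9, 0.0], [0.0, 1.9]])
print((query @ W).argmax(axis=1))
```

The regularizer `lam` controls the tradeoff the paper studies: a stronger prior lets the base learner work with smaller feature sizes.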
Charting the Right Manifold: Manifold Mixup for Few-shot Learning
A recent regularization technique, Manifold Mixup, focuses on learning a general-purpose representation that is robust to small changes in the data distribution.
Compact Bilinear Pooling
Bilinear models have been shown to achieve impressive performance on a wide range of visual tasks, such as semantic segmentation, fine-grained recognition and face recognition.
Few-Shot Learning with Graph Neural Networks
We propose to study the problem of few-shot learning through the prism of inference on a partially observed graphical model, constructed from a collection of input images whose labels may be either observed or not.
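The inference-on-a-partially-observed-graph idea can be illustrated without the paper's learned GNN: plain label propagation (used here as a simpler stand-in) builds a similarity graph over support and query examples and spreads the observed labels to the unobserved nodes.

```python
import numpy as np

def label_propagation(feats, labels, n_classes, n_steps=20):
    # Similarity graph over all examples; support nodes carry observed
    # labels (>= 0), query nodes are marked -1 and get inferred labels.
    sim = np.exp(-((feats[:, None] - feats[None]) ** 2).sum(-1))
    A = sim / sim.sum(axis=1, keepdims=True)  # row-normalized affinity
    observed = labels >= 0
    Y = np.zeros((len(feats), n_classes))
    Y[observed] = np.eye(n_classes)[labels[observed]]
    F = Y.copy()
    for _ in range(n_steps):
        F = A @ F
        F[observed] = Y[observed]  # clamp the observed labels each step
    return F.argmax(axis=1)

# Two clusters; one labeled and one unlabeled node per cluster.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
labels = np.array([0, -1, 1, -1])
print(label_propagation(feats, labels, n_classes=2))
```

The paper's contribution is to learn the graph edges and the message-passing updates end-to-end rather than fixing them as a Gaussian kernel.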
Meta-Learning with Implicit Gradients
By drawing upon implicit differentiation, we develop the implicit MAML algorithm, which depends only on the solution to the inner level optimization and not the path taken by the inner loop optimizer.
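The key identity can be checked on a toy problem. In this 1-D sketch (an assumption for illustration, not the paper's experimental setup), the inner problem is a proximally regularized quadratic, so its solution is exact and the implicit meta-gradient, which uses only the solution and the inner Hessian, can be verified against finite differences.

```python
import numpy as np

# Inner:  phi*(theta) = argmin_phi  a/2 * phi**2 + lam/2 * (phi - theta)**2
# Outer:  L_out(phi)  = 0.5 * (phi - t)**2
a, lam, t = 3.0, 2.0, 1.0

def inner_solution(theta):
    # Exact minimizer of the regularized inner problem.
    return lam * theta / (a + lam)

def meta_grad_implicit(theta):
    phi = inner_solution(theta)
    g_outer = phi - t  # dL_out/dphi evaluated at phi*
    # Implicit function theorem: dphi*/dtheta = (1 + H/lam)^-1 with
    # H = a the inner-loss Hessian -- no unrolled inner loop needed.
    return g_outer / (1.0 + a / lam)

def meta_grad_numeric(theta, eps=1e-6):
    f = lambda th: 0.5 * (inner_solution(th) - t) ** 2
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

theta0 = 0.5
print(meta_grad_implicit(theta0), meta_grad_numeric(theta0))
```

In higher dimensions the `(1 + H/lam)^-1` term becomes a linear solve against the inner Hessian, which iMAML approximates with conjugate-gradient steps.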
SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning
Few-shot learners aim to recognize new object classes based on a small number of labeled training examples.
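A sketch of the SimpleShot-style pipeline: center the features (the paper centers on base-class statistics; the support mean is used here as a stand-in), L2-normalize, and classify each query by its nearest class centroid.

```python
import numpy as np

def simpleshot_predict(support_feats, support_y, query_feats, n_classes):
    # Center features (support mean standing in for base-class mean),
    # L2-normalize, then assign each query to the nearest class centroid.
    mean = support_feats.mean(axis=0)
    def transform(z):
        z = z - mean
        return z / np.linalg.norm(z, axis=1, keepdims=True)
    s, q = transform(support_feats), transform(query_feats)
    centroids = np.stack([s[support_y == c].mean(axis=0) for c in range(n_classes)])
    dists = ((q[:, None] - centroids[None]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# Toy 2-way 2-shot episode.
support = np.array([[2.0, 0.0], [2.2, 0.1], [0.0, 2.0], [0.1, 2.2]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[1.9, 0.0], [0.0, 1.9]])
print(simpleshot_predict(support, labels, queries, n_classes=2))
```

The paper's point is that with good pretrained features and these simple transforms, nearest-neighbor classification is a strong few-shot baseline.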