Few-Shot Learning
1071 papers with code • 23 benchmarks • 43 datasets
Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for various tasks and train task-specific classifiers on top of this representation.
Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
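The "common representation plus task-specific classifier" approach above can be sketched with a minimal nearest-centroid (prototypical-network-style) episode: a frozen encoder maps examples into a shared embedding space, class prototypes are computed from the few labeled support examples, and queries are assigned to the nearest prototype. The function name and the toy 2-D "embeddings" below are illustrative assumptions, not from any specific paper.

```python
import numpy as np

def few_shot_classify(support_embeddings, support_labels, query_embeddings):
    """Nearest-centroid few-shot classifier over a shared embedding space.

    support_embeddings: (n_support, d) features from a (frozen) encoder
    support_labels:     (n_support,) integer class labels
    query_embeddings:   (n_query, d) features for unlabeled queries
    Returns the predicted class label for each query.
    """
    classes = np.unique(support_labels)
    # Each class prototype is the mean embedding of its support examples.
    prototypes = np.stack([
        support_embeddings[support_labels == c].mean(axis=0) for c in classes
    ])
    # Assign each query to the nearest prototype (Euclidean distance).
    dists = np.linalg.norm(
        query_embeddings[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return classes[dists.argmin(axis=1)]

# Toy 2-way 2-shot episode with hand-made 2-D "embeddings".
support = np.array([[0.0, 0.0], [0.1, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.05, 0.0], [0.95, 1.0]])
print(few_shot_classify(support, labels, queries))  # -> [0 1]
```

In a real few-shot pipeline the embeddings would come from a network meta-trained over many episodes; only the cheap per-task classifier (here, the prototypes) is fit from the few labeled examples.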
Libraries
Use these libraries to find Few-Shot Learning models and implementations.
Subtasks
Latest papers
Modularized Networks for Few-shot Hateful Meme Detection
We then use the few available annotated samples to train a module composer, which assigns weights to the LoRA modules based on their relevance.
Self-Augmented In-Context Learning for Unsupervised Word Translation
Recent work has shown that, while large language models (LLMs) demonstrate strong word translation or bilingual lexicon induction (BLI) capabilities in few-shot setups, they still cannot match the performance of 'traditional' mapping-based approaches in the unsupervised scenario where no seed translation pairs are available, especially for lower-resource languages.
Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
Recent advancements in large language models have sparked interest in their extraordinary and near-superhuman capabilities, leading researchers to explore methods for evaluating and optimizing these abilities, which is called superalignment.
Large Language Models to Enhance Bayesian Optimization
Bayesian optimization (BO) is a powerful approach for optimizing complex and expensive-to-evaluate black-box functions.
BECLR: Batch Enhanced Contrastive Few-Shot Learning
Learning quickly from very few labeled samples is a fundamental attribute that separates machines and humans in the era of deep representation learning.
Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
Augmenting large language models (LLMs) to understand audio -- including non-speech sounds and non-verbal speech -- is critically important for diverse real-world applications of LLMs.
On the Transferability of Large-Scale Self-Supervision to Few-Shot Audio Classification
In recent years, self-supervised learning has excelled for its capacity to learn robust feature representations from unlabelled data.
HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation
Neural radiance fields (NeRFs) are a widely accepted standard for synthesizing new 3D object views from a small number of base images.
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
Through these operations based on in-context learning our framework enables the creation and evaluation of explainable computational graphs.
Reviving Undersampling for Long-Tailed Learning
In this paper, we aim to enhance the accuracy of the worst-performing categories and utilize the harmonic mean and geometric mean to assess the model's performance.