Few-Shot Learning
1041 papers with code • 22 benchmarks • 41 datasets
Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation across tasks and train task-specific classifiers on top of this representation.
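The "shared representation plus task-specific classifier" idea can be sketched in a few lines. This is a minimal illustration, not any specific paper's method: `embed` stands in for a meta-trained shared network (here a fixed, hypothetical linear projection), and the task-specific classifier is a nearest-class-centroid rule built from the task's few labelled examples, in the style of prototypical networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x):
    # Stand-in for a meta-trained shared representation;
    # a real system would use a learned neural network.
    W = np.full((4, 3), 0.5)  # hypothetical fixed projection
    return x @ W

# A 2-way 3-shot task: three labelled examples per class.
support_x = rng.normal(loc=[[0.0] * 4] * 3 + [[2.0] * 4] * 3, scale=0.1)
support_y = np.array([0, 0, 0, 1, 1, 1])

# Task-specific classifier on top of the shared representation:
# one centroid ("prototype") per class in embedding space.
z = embed(support_x)
prototypes = np.stack([z[support_y == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    # Assign each input to the class with the nearest prototype.
    d = np.linalg.norm(embed(x)[:, None, :] - prototypes[None], axis=-1)
    return d.argmin(axis=1)

query_x = np.array([[0.0] * 4, [2.0] * 4])
print(classify(query_x))  # → [0 1]
```

Only the embedding is shared across tasks; the prototypes are rebuilt from scratch for each new task, which is what lets the learner adapt from a handful of examples.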
Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
Libraries
Use these libraries to find Few-Shot Learning models and implementations.
Latest papers
Cross-domain Multi-modal Few-shot Object Detection via Rich Text
Cross-modal feature extraction and integration have led to steady performance improvements in few-shot learning tasks due to generating richer features.
MatchSeg: Towards Better Segmentation via Reference Image Matching
Few-shot learning aims to overcome the need for annotated data by using a small labeled dataset, known as a support set, to guide predicting labels for new, unlabeled images, known as the query set.
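The support/query terminology can be made concrete with a toy example. The sketch below is a hypothetical illustration (not the MatchSeg method): each query item is labelled by its single nearest neighbour in the small labelled support set.

```python
import numpy as np

# Support set: a few labelled examples per class (toy 2-D features).
support = np.array([[0.1, 0.0], [0.0, 0.2],   # class "cat"
                    [1.0, 0.9], [0.9, 1.1]])  # class "dog"
support_labels = np.array(["cat", "cat", "dog", "dog"])

# Query set: new, unlabelled examples to be predicted.
query = np.array([[0.05, 0.1], [1.05, 1.0]])

# 1-nearest-neighbour over the support set labels each query point.
dists = np.linalg.norm(query[:, None, :] - support[None, :, :], axis=-1)
pred = support_labels[dists.argmin(axis=1)]
print(pred)  # → ['cat' 'dog']
```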
Comprehensive Evaluation and Insights into the Use of Large Language Models in the Automation of Behavior-Driven Development Acceptance Test Formulation
Behavior-driven development (BDD) is an Agile testing methodology fostering collaboration among developers, QA analysts, and stakeholders.
Negative Yields Positive: Unified Dual-Path Adapter for Vision-Language Models
Recently, large-scale pre-trained Vision-Language Models (VLMs) have demonstrated great potential in learning open-world visual representations, and exhibit remarkable performance across a wide range of downstream tasks through efficient fine-tuning.
Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs
On complex downstream tasks with limited data, such as fluid flow simulations and fluid-structure interactions, we found CoDA-NO to outperform existing methods on the few-shot learning task by over $36\%$.
TaxoLLaMA: WordNet-based Model for Solving Multiple Lexical Semantic Tasks
It achieves 11 SotA results and 4 top-2 results out of 16 tasks for the Taxonomy Enrichment, Hypernym Discovery, Taxonomy Construction, and Lexical Entailment tasks.
ClinicalMamba: A Generative Clinical Language Model on Longitudinal Clinical Notes
The advancement of natural language processing (NLP) systems in healthcare hinges on language models' ability to interpret the intricate information contained within clinical notes.
Discriminative Sample-Guided and Parameter-Efficient Feature Space Adaptation for Cross-Domain Few-Shot Learning
In this paper, we look at cross-domain few-shot classification which presents the challenging task of learning new classes in previously unseen domains with few labelled examples.
Task Attribute Distance for Few-Shot Learning: Theoretical Analysis and Applications
In this paper, we try to understand FSL by delving into two key questions: (1) How to quantify the relationship between *training* and *novel* tasks?
Few-shot Learner Parameterization by Diffusion Time-steps
To this end, we find an inductive bias that the time-steps of a Diffusion Model (DM) can isolate the nuanced class attributes: as the forward diffusion adds noise to an image at each time-step, nuanced attributes are usually lost at an earlier time-step than the spurious attributes that are visually prominent.