Few-Shot Learning

1071 papers with code • 23 benchmarks • 43 datasets

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for various tasks and train task-specific classifiers on top of this representation.
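The "shared representation plus task-specific classifier" idea can be sketched with a nearest-prototype classifier on top of an embedding function. This is a minimal toy illustration, not any specific paper's method: `embed` stands in for a representation learned during meta-training (here just an identity projection so the numbers stay transparent), and the per-task "classifier" is simply one mean embedding per class.

```python
import numpy as np

def embed(x):
    # Stand-in for a shared representation learned during meta-training.
    # A real system would use a trained network; identity keeps this toy clear.
    return x @ np.eye(4)

def fit_prototypes(support_x, support_y):
    """Task-specific 'classifier': one mean embedding (prototype) per class."""
    classes = np.unique(support_y)
    protos = np.stack(
        [embed(support_x[support_y == c]).mean(axis=0) for c in classes]
    )
    return classes, protos

def predict(classes, protos, query_x):
    """Assign each query to the class of its nearest prototype."""
    z = embed(query_x)
    d = ((z[:, None, :] - protos[None, :, :]) ** 2).sum(-1)  # squared distances
    return classes[d.argmin(axis=1)]

# Toy 2-way 2-shot task: class 0 near the origin, class 1 near (5,5,5,5).
support_x = np.array([[0.0, 0, 0, 0], [0.1, 0, 0, 0],
                      [5.0, 5, 5, 5], [5.1, 5, 5, 5]])
support_y = np.array([0, 0, 1, 1])
classes, protos = fit_prototypes(support_x, support_y)
queries = np.array([[0.2, 0, 0, 0], [4.9, 5, 5, 5]])
print(predict(classes, protos, queries))  # → [0 1]
```

Because only the prototypes are recomputed per task, adapting to a new task needs no gradient updates, which is what makes this family of approaches attractive in the few-shot regime.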

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization


ClinicalMamba: A Generative Clinical Language Model on Longitudinal Clinical Notes

whaleloops/clinicalmamba 9 Mar 2024

The advancement of natural language processing (NLP) systems in healthcare hinges on the ability of language models to interpret the intricate information contained within clinical notes.

Discriminative Sample-Guided and Parameter-Efficient Feature Space Adaptation for Cross-Domain Few-Shot Learning

rashindrie/dipa 7 Mar 2024

In this paper, we look at cross-domain few-shot classification which presents the challenging task of learning new classes in previously unseen domains with few labelled examples.

Task Attribute Distance for Few-Shot Learning: Theoretical Analysis and Applications

hu-my/taskattributedistance 6 Mar 2024

In this paper, we try to understand FSL by delving into two key questions: (1) How to quantify the relationship between training and novel tasks?

Few-shot Learner Parameterization by Diffusion Time-steps

yue-zhongqi/tif 5 Mar 2024

To this end, we find an inductive bias: the time-steps of a Diffusion Model (DM) can isolate the nuanced class attributes, i.e., as the forward diffusion adds noise to an image at each time-step, nuanced attributes are usually lost at an earlier time-step than the spurious attributes that are visually prominent.

Enhancing Information Maximization with Distance-Aware Contrastive Learning for Source-Free Cross-Domain Few-Shot Learning

xuhuali-mxj/im-dcl 4 Mar 2024

For this reason, this paper explores a Source-Free CDFSL (SF-CDFSL) problem, in which CDFSL is addressed using existing pretrained models instead of training a model with source data, thereby avoiding access to the source data.

STAR: Constraint LoRA with Dynamic Active Learning for Data-Efficient Fine-Tuning of Large Language Models

callanwu/star 2 Mar 2024

To address poor model calibration, we incorporate a regularization method during LoRA training to keep the model from becoming over-confident, and employ the Monte-Carlo dropout mechanism to enhance uncertainty estimation.
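Monte-Carlo dropout, mentioned in the abstract above, estimates uncertainty by keeping dropout active at inference and averaging over several stochastic forward passes. The sketch below is a generic illustration with toy weights, not the STAR implementation: the spread (standard deviation) across passes serves as a simple uncertainty signal.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # hypothetical "trained" weights of a tiny classifier

def forward(x, drop_p=0.5):
    # Dropout stays ACTIVE at inference time; that is what makes it "MC" dropout.
    mask = rng.random(x.shape) > drop_p
    h = x * mask / (1.0 - drop_p)          # inverted dropout scaling
    logits = h @ W
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax probabilities

def mc_dropout_uncertainty(x, n_samples=100):
    """Mean prediction and per-class std over stochastic forward passes."""
    probs = np.stack([forward(x) for _ in range(n_samples)])
    return probs.mean(axis=0), probs.std(axis=0)

mean, std = mc_dropout_uncertainty(np.ones((1, 4)))
print(mean, std)  # mean is a valid distribution; std flags uncertain inputs
```

A high per-class standard deviation indicates inputs the model is unsure about, which is exactly the signal an active-learning loop can use to decide which samples to label next.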

FSL-Rectifier: Rectify Outliers in Few-Shot Learning via Test-Time Augmentation

wendybaiyunwei/fsl-rectifier 28 Feb 2024

Few-shot learning (FSL) commonly requires a model to identify images (queries) that belong to classes unseen during training, based on a few labelled samples of the new classes (support set) as reference.

Parameter-efficient Prompt Learning for 3D Point Cloud Understanding

auniquesun/ppt 24 Feb 2024

Finally, a lightweight PointAdapter module is arranged near target tasks to enhance prompt tuning for 3D point cloud understanding.

Me LLaMA: Foundation Large Language Models for Medical Applications

bids-xu-lab/me-llama 20 Feb 2024

In response to this challenge, this study introduces Me-LLaMA, a novel medical LLM family that includes foundation models (Me-LLaMA 13/70B) along with their chat-enhanced versions (Me-LLaMA 13/70B-chat), developed through continual pre-training and instruction tuning of LLaMA2 using large medical datasets.

Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation

tsinghua-fib-lab/gpd 19 Feb 2024

Spatio-temporal modeling is foundational for smart city applications, yet it is often hindered by data scarcity in many cities and regions.
