Few-Shot Learning

1071 papers with code • 23 benchmarks • 43 datasets

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation across tasks and train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
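The "common representation + task-specific classifier" recipe described above can be sketched as follows. This is an illustrative toy in the style of prototypical networks, not code from the cited paper; `embed` is a stand-in for a meta-trained backbone, and the names and shapes are assumptions:

```python
import numpy as np

def embed(x):
    """Stand-in for the shared representation learned during meta-training.
    Here it is a fixed random linear map; in practice it is a deep network."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((x.shape[-1], 8))
    return x @ W

def few_shot_classify(support_x, support_y, query_x):
    """Task-specific classifier built on top of the shared representation:
    nearest class centroid in embedding space (prototypical-network style)."""
    z_support = embed(support_x)
    z_query = embed(query_x)
    classes = np.unique(support_y)
    # One prototype per class: the mean embedding of its few support examples.
    prototypes = np.stack(
        [z_support[support_y == c].mean(axis=0) for c in classes]
    )
    # Assign each query to the class of the closest prototype.
    dists = np.linalg.norm(z_query[:, None, :] - prototypes[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]
```

At meta-test time, only the cheap prototype computation runs per task; the embedding itself stays fixed, which is what lets a handful of labeled examples suffice.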


Modularized Networks for Few-shot Hateful Meme Detection

social-ai-studio/mod_hate 19 Feb 2024

We then use the few available annotated samples to train a module composer, which assigns weights to the LoRA modules based on their relevance.
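The weighted combination of LoRA modules that such a composer produces can be sketched roughly as below. This is a hypothetical illustration: `compose_lora`, its signature, and the simple weighted-sum merge are assumptions, not the authors' implementation:

```python
import numpy as np

def compose_lora(base_W, lora_modules, weights):
    """Merge several LoRA modules into one effective weight matrix.

    Each module is a low-rank pair (A, B) whose product A @ B is a
    weight update; the composer's relevance weights scale each update
    before they are summed onto the frozen base weights."""
    delta = sum(w * (A @ B) for w, (A, B) in zip(weights, lora_modules))
    return base_W + delta
```

In this framing, the few annotated samples are only used to fit the relevance weights, which is far cheaper than fine-tuning the modules themselves.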


Self-Augmented In-Context Learning for Unsupervised Word Translation

cambridgeltl/sail-bli 15 Feb 2024

Recent work has shown that, while large language models (LLMs) demonstrate strong word translation or bilingual lexicon induction (BLI) capabilities in few-shot setups, they still cannot match the performance of 'traditional' mapping-based approaches in the unsupervised scenario where no seed translation pairs are available, especially for lower-resource languages.


Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models

ggjy/vision_weak_to_strong 6 Feb 2024

Recent advancements in large language models have sparked interest in their extraordinary and near-superhuman capabilities, leading researchers to explore methods for evaluating and optimizing these abilities, an endeavor known as superalignment.


Large Language Models to Enhance Bayesian Optimization

tennisonliu/llambo 6 Feb 2024

Bayesian optimization (BO) is a powerful approach for optimizing complex and expensive-to-evaluate black-box functions.


BECLR: Batch Enhanced Contrastive Few-Shot Learning

stypoumic/beclr ICLR 2024

Learning quickly from very few labeled samples is a fundamental attribute that separates machines from humans in the era of deep representation learning.

04 Feb 2024

Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities

NVIDIA/audio-flamingo 2 Feb 2024

Augmenting large language models (LLMs) to understand audio -- including non-speech sounds and non-verbal speech -- is critically important for diverse real-world applications of LLMs.


On the Transferability of Large-Scale Self-Supervision to Few-Shot Audio Classification

CHeggan/Few-Shot-Classification-for-Audio-Evaluation 2 Feb 2024

In recent years, self-supervised learning has excelled thanks to its capacity to learn robust feature representations from unlabelled data.


HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation

gmum/hyperplanes 2 Feb 2024

Neural radiance fields (NeRFs) are a widely accepted standard for synthesizing new 3D object views from a small number of base images.


SymbolicAI: A framework for logic-based approaches combining generative models and solvers

ExtensityAI/symbolicai 1 Feb 2024

Through these operations, based on in-context learning, our framework enables the creation and evaluation of explainable computational graphs.


Reviving Undersampling for Long-Tailed Learning

yuhao318/atomic_feature_mimicking 30 Jan 2024

In this paper, we aim to enhance the accuracy of the worst-performing categories and utilize the harmonic mean and geometric mean to assess the model's performance.
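Unlike the arithmetic mean, the harmonic and geometric means are dragged down sharply by the worst-performing categories, which is what makes them suitable here. A minimal sketch of both metrics over per-class accuracies (illustrative only, not the paper's code):

```python
import numpy as np

def harmonic_mean(per_class_acc):
    """Harmonic mean of per-class accuracies.
    Undefined (infinite penalty) if any class has zero accuracy."""
    acc = np.asarray(per_class_acc, dtype=float)
    return len(acc) / np.sum(1.0 / acc)

def geometric_mean(per_class_acc):
    """Geometric mean of per-class accuracies, computed in log space
    for numerical stability."""
    acc = np.asarray(per_class_acc, dtype=float)
    return float(np.exp(np.mean(np.log(acc))))
```

For example, per-class accuracies of 0.5 and 1.0 give an arithmetic mean of 0.75 but a harmonic mean of only about 0.67, reflecting the heavier penalty on the weaker class.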
