Few-Shot Learning

1059 papers with code • 23 benchmarks • 42 datasets

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation across tasks and train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
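The "common representation plus task-specific classifier" idea above can be sketched with a prototypical-network-style classifier: a shared embedding maps inputs into a feature space, and each new task builds a classifier by averaging the embeddings of its few support examples into class prototypes. This is a minimal illustrative sketch, not any specific paper's method; the fixed linear embedding `W` stands in for a deep network that would normally be meta-trained.

```python
import numpy as np

def embed(x, W):
    # Stand-in for a shared representation; in practice this would be
    # a deep network meta-trained across many related tasks.
    return np.tanh(x @ W)

def prototype_classify(support_x, support_y, query_x, W):
    """Nearest-prototype few-shot classifier.

    Each class prototype is the mean embedding of that class's support
    examples; queries are assigned to the nearest prototype.
    """
    classes = np.unique(support_y)
    protos = np.stack([
        embed(support_x[support_y == c], W).mean(axis=0)
        for c in classes
    ])
    q = embed(query_x, W)
    # Squared Euclidean distance from each query to each prototype.
    dists = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# 2-way 1-shot toy task: one labeled example per class.
W = np.eye(2)
support_x = np.array([[3.0, 0.0], [0.0, 3.0]])
support_y = np.array([0, 1])
query_x = np.array([[2.5, 0.1], [0.1, 2.5]])
preds = prototype_classify(support_x, support_y, query_x, W)
```

Meta-training would tune the embedding so that prototypes from only one or a few shots per class already separate well; at meta-test time no gradient steps are needed for a new task.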

Me LLaMA: Foundation Large Language Models for Medical Applications

bids-xu-lab/me-llama 20 Feb 2024

In response to this challenge, this study introduces Me-LLaMA, a novel medical LLM family that includes foundation models - Me-LLaMA 13/70B, along with their chat-enhanced versions - Me-LLaMA 13/70B-chat, developed through continual pre-training and instruction tuning of LLaMA2 using large medical datasets.

Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation

tsinghua-fib-lab/gpd 19 Feb 2024

Spatio-temporal modeling is foundational for smart city applications, yet it is often hindered by data scarcity in many cities and regions.

Modularized Networks for Few-shot Hateful Meme Detection

social-ai-studio/mod_hate 19 Feb 2024

We then use the few available annotated samples to train a module composer, which assigns weights to the LoRA modules based on their relevance.

Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models

ggjy/vision_weak_to_strong 6 Feb 2024

Recent advancements in large language models have sparked interest in their extraordinary and near-superhuman capabilities, leading researchers to explore methods for evaluating and optimizing these abilities, which is called superalignment.

Large Language Models to Enhance Bayesian Optimization

tennisonliu/llambo 6 Feb 2024

Bayesian optimization (BO) is a powerful approach for optimizing complex and expensive-to-evaluate black-box functions.

BECLR: Batch Enhanced Contrastive Few-Shot Learning

stypoumic/beclr ICLR 2024

Learning quickly from very few labeled samples is a fundamental attribute that separates machines and humans in the era of deep representation learning.

04 Feb 2024

On the Transferability of Large-Scale Self-Supervision to Few-Shot Audio Classification

CHeggan/Few-Shot-Classification-for-Audio-Evaluation 2 Feb 2024

In recent years, self-supervised learning has excelled for its capacity to learn robust feature representations from unlabelled data.

HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation

gmum/hyperplanes 2 Feb 2024

Neural radiance fields (NeRFs) are a widely accepted standard for synthesizing new 3D object views from a small number of base images.

SymbolicAI: A framework for logic-based approaches combining generative models and solvers

ExtensityAI/symbolicai 1 Feb 2024

We conclude by introducing a quality measure and its empirical score for evaluating these computational graphs, and propose a benchmark that compares various state-of-the-art LLMs across a set of complex workflows.

Reviving Undersampling for Long-Tailed Learning

yuhao318/btm 30 Jan 2024

In this paper, we aim to enhance the accuracy of the worst-performing categories and utilize the harmonic mean and geometric mean to assess the model's performance.
