Few-Shot Learning
1059 papers with code • 23 benchmarks • 42 datasets
Few-Shot Learning is an instance of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize to unseen (but related) tasks from just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a representation common to the various tasks and to train task-specific classifiers on top of this representation, as sketched below.
Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
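For readers new to the setup, here is a minimal, self-contained sketch of the "shared representation + task-specific classifier" idea, using a prototypical-network-style episode. The network sizes, episode shape (5-way, 1-shot, 15 queries), and all names are illustrative assumptions, not taken from the paper cited above.

```python
# Minimal sketch: a shared encoder is meta-trained across episodes, while the
# task-specific "classifier" is built on the fly from a few support examples.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))  # shared across tasks

def episode_loss(support_x, support_y, query_x, query_y, n_way):
    # Embed support and query points with the shared encoder.
    z_support = encoder(support_x)
    z_query = encoder(query_x)
    # Task-specific classifier: one prototype (mean embedding) per class.
    prototypes = torch.stack([z_support[support_y == c].mean(0) for c in range(n_way)])
    # Classify queries by negative squared distance to each prototype.
    logits = -torch.cdist(z_query, prototypes) ** 2
    return nn.functional.cross_entropy(logits, query_y)

# One synthetic 5-way, 1-shot episode with 15 query points.
n_way, x_dim = 5, 32
support_x, support_y = torch.randn(n_way, x_dim), torch.arange(n_way)
query_x, query_y = torch.randn(15, x_dim), torch.randint(0, n_way, (15,))
loss = episode_loss(support_x, support_y, query_x, query_y, n_way)
loss.backward()  # meta-training: repeat over many such episodes to update the shared encoder
```

At meta-test time the same encoder is frozen and only the prototypes are recomputed from the new task's few labeled examples.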
Libraries
Use these libraries to find Few-Shot Learning models and implementations.
Latest papers
Me LLaMA: Foundation Large Language Models for Medical Applications
This study introduces Me-LLaMA, a novel medical LLM family that includes the foundation models Me-LLaMA 13/70B along with their chat-enhanced versions, Me-LLaMA 13/70B-chat, developed through continual pre-training and instruction tuning of LLaMA2 on large medical datasets.
Spatio-Temporal Few-Shot Learning via Diffusive Neural Network Generation
Spatio-temporal modeling is foundational for smart city applications, yet it is often hindered by data scarcity in many cities and regions.
Modularized Networks for Few-shot Hateful Meme Detection
We then use the few available annotated samples to train a module composer, which assigns weights to the LoRA modules based on their relevance.
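As a rough illustration of that composition step, the sketch below mixes a bank of LoRA adapters with softmax-normalized relevance weights. The module count, tensor shapes, and the composer itself (here just a bare learnable weight vector) are hypothetical stand-ins for the paper's trained module composer.

```python
# Hypothetical sketch: compose frozen LoRA modules with learned relevance weights.
import torch
import torch.nn as nn

d_in, d_out, rank, n_modules = 768, 768, 8, 3
base = nn.Linear(d_in, d_out)
for p in base.parameters():
    p.requires_grad = False  # base model weights stay frozen

# A bank of pre-trained low-rank adapters (randomly initialized stand-ins here).
lora_A = [torch.randn(rank, d_in) * 0.01 for _ in range(n_modules)]
lora_B = [torch.randn(d_out, rank) * 0.01 for _ in range(n_modules)]

# The composer's output: one relevance weight per LoRA module, trained on the
# few annotated samples (modeled here as a single learnable logit vector).
composer_logits = nn.Parameter(torch.zeros(n_modules))

def forward(x):
    w = torch.softmax(composer_logits, dim=0)   # relevance weights sum to 1
    out = base(x)
    for i in range(n_modules):
        out = out + w[i] * (x @ lora_A[i].T @ lora_B[i].T)  # weighted LoRA update
    return out

y = forward(torch.randn(4, d_in))  # only composer_logits would receive gradients
```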
Vision Superalignment: Weak-to-Strong Generalization for Vision Foundation Models
Recent advances in large language models have sparked interest in their extraordinary, near-superhuman capabilities, leading researchers to explore methods for evaluating and optimizing these abilities, a problem known as superalignment.
Large Language Models to Enhance Bayesian Optimization
Bayesian optimization (BO) is a powerful approach for optimizing complex and expensive-to-evaluate black-box functions.
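For context, a bare-bones BO loop looks like the following: fit a surrogate model to past evaluations, pick the next point by maximizing an acquisition function over candidates, evaluate it, and repeat. The toy objective, the UCB acquisition, and all constants below are assumptions for illustration; the paper's LLM-based enhancements are not depicted.

```python
# Minimal Bayesian-optimization loop: GP surrogate + UCB acquisition.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):            # expensive black-box function (toy stand-in)
    return -(x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (3, 1))           # initial design points
y = objective(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor().fit(X, y)      # surrogate over past evaluations
    cand = rng.uniform(0, 1, (256, 1))             # random candidate points
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + 2.0 * sigma)]     # UCB: mean + 2*std
    X = np.vstack([X, x_next[None, :]])
    y = np.append(y, objective(x_next[0]))

print("best x:", X[np.argmax(y), 0])   # should approach the optimum at 0.3
```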
BECLR: Batch Enhanced Contrastive Few-Shot Learning
Learning quickly from very few labeled samples is a fundamental ability that separates humans from machines in the era of deep representation learning.
On the Transferability of Large-Scale Self-Supervision to Few-Shot Audio Classification
In recent years, self-supervised learning has excelled for its capacity to learn robust feature representations from unlabelled data.
HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation
Neural radiance fields (NeRFs) are a widely accepted standard for synthesizing new 3D object views from a small number of base images.
SymbolicAI: A framework for logic-based approaches combining generative models and solvers
We conclude by introducing a quality measure and its empirical score for evaluating these computational graphs, and propose a benchmark that compares various state-of-the-art LLMs across a set of complex workflows.
Reviving Undersampling for Long-Tailed Learning
In this paper, we aim to enhance the accuracy of the worst-performing categories and utilize the harmonic mean and geometric mean to assess the model's performance.
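To see why these means matter for long-tailed evaluation, the snippet below compares the arithmetic, harmonic, and geometric means over hypothetical per-class accuracies; the numbers are made up purely to show how a single weak category drags down the harmonic mean far more than the arithmetic one.

```python
# Compare aggregation metrics over per-class accuracies (hypothetical values).
import numpy as np

acc = np.array([0.95, 0.90, 0.85, 0.20])         # one long-tailed class at 0.20
arithmetic = acc.mean()
harmonic = len(acc) / np.sum(1.0 / acc)
geometric = np.exp(np.mean(np.log(acc)))
print(arithmetic, harmonic, geometric)            # ~0.72, ~0.48, ~0.62
```

Because the harmonic mean is dominated by the smallest terms, optimizing it directly rewards improving the worst-performing categories.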