Few-Shot Image Classification
201 papers with code • 88 benchmarks • 23 datasets
Few-Shot Image Classification is a computer vision task in which machine learning models learn to classify images into predefined categories from only a few labeled examples per category (typically fewer than 6). The goal is to enable models to recognize and classify new images with minimal supervision, without training on large labeled datasets.
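A common baseline for this N-way, K-shot setup is a nearest-centroid ("prototypical") classifier over pre-extracted features: average the support features of each class into a prototype, then assign each query to the closest prototype. The sketch below is purely illustrative; the 2-D "features" and class names are made up and stand in for the output of a real feature extractor.

```python
import math

def prototypes(support):
    """Compute one mean-feature prototype per class.

    support: dict mapping class label -> list of feature vectors
    (each vector a list of floats from some feature extractor).
    """
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return protos

def classify(query, protos):
    """Assign a query feature vector to the nearest prototype (Euclidean)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(protos, key=lambda label: dist(query, protos[label]))

# Toy 2-way, 2-shot episode with hand-made 2-D "features".
support = {
    "cat": [[0.9, 0.1], [0.8, 0.2]],
    "dog": [[0.1, 0.9], [0.2, 0.8]],
}
protos = prototypes(support)
print(classify([0.85, 0.15], protos))  # -> cat
```

In a real pipeline the vectors would come from a backbone trained on disjoint base classes, and evaluation would average accuracy over many randomly sampled episodes.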
(Image credit: Learning Embedding Adaptation for Few-Shot Learning)
Latest papers
Unleashing the Power of Meta-tuning for Few-shot Generalization Through Sparse Interpolated Experts
Conventional wisdom suggests parameter-efficient fine-tuning of foundation models as the state-of-the-art method for transfer learning in vision, replacing the rich literature of alternatives such as meta-learning.
BECLR: Batch Enhanced Contrastive Few-Shot Learning
Learning quickly from very few labeled samples is a fundamental attribute that separates machines from humans in the era of deep representation learning.
RAFIC: Retrieval-Augmented Few-shot Image Classification
Few-shot image classification is the task of classifying unseen images to one of N mutually exclusive classes, using only a small number of training examples for each class.
Large Language Models are Good Prompt Learners for Low-Shot Image Classification
We propose LLaMP, Large Language Models as Prompt learners, which produces adaptive prompts for the CLIP text encoder, establishing the LLM as the connecting bridge.
Diversified in-domain synthesis with efficient fine-tuning for few-shot classification
Few-shot image classification aims to learn an image classifier using only a small set of labeled examples per class.
Are LSTMs Good Few-Shot Learners?
Meta-learning overcomes this limitation by learning how to learn.
Context-Aware Meta-Learning
Large Language Models like ChatGPT demonstrate a remarkable capacity to learn new concepts during inference without any fine-tuning.
Subspace Adaptation Prior for Few-Shot Learning
Gradient-based meta-learning techniques aim to distill useful prior knowledge from a set of training tasks such that new tasks can be learned more efficiently with gradient descent.
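As a rough illustration of the gradient-based idea (not any particular paper's method), the toy sketch below adapts a single scalar parameter with one inner gradient step per task and then measures the post-adaptation loss that the meta-learner would minimize. The quadratic per-task loss and all names here are hypothetical.

```python
def inner_step(theta, task_target, lr=0.1):
    """One gradient-descent step on the per-task loss L(theta) = (theta - target)^2.

    The gradient is 2 * (theta - target); this is the task-specific adaptation.
    """
    grad = 2.0 * (theta - task_target)
    return theta - lr * grad

def meta_loss(theta, task_targets, lr=0.1):
    """Average loss *after* one adaptation step per task.

    Gradient-based meta-learning optimizes theta so this quantity is small,
    i.e. so that a single gradient step suffices on a new task.
    """
    total = 0.0
    for target in task_targets:
        adapted = inner_step(theta, target, lr)
        total += (adapted - target) ** 2
    return total / len(task_targets)

tasks = [1.0, -1.0]  # hypothetical task targets
print(meta_loss(0.0, tasks))  # each residual shrinks by a factor (1 - 2*lr)^2
```

In full MAML-style training, theta itself would be updated by differentiating `meta_loss` with respect to the initialization, typically with automatic differentiation over neural-network parameters.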
SemiReward: A General Reward Model for Semi-supervised Learning
The main challenge is distinguishing high-quality pseudo labels from noisy ones in the presence of confirmation bias.
Logarithm-transform aided Gaussian Sampling for Few-Shot Learning
These methods work by transforming the distributions of experimental data so that they approximate Gaussian distributions.
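A minimal sketch of that idea, under the assumption that the data are positively skewed (here, synthetic log-normal values): take logs so the values become approximately Gaussian, fit a mean and standard deviation in log space, sample new points from that Gaussian, and map them back with `exp`.

```python
import math
import random

random.seed(0)

# Synthetic skewed "feature" values: log-normal, so log(x) is Gaussian.
data = [random.lognormvariate(0.0, 1.0) for _ in range(1000)]

# Log-transform: the empirical distribution of logs is close to N(0, 1).
logs = [math.log(x) for x in data]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((v - mu) ** 2 for v in logs) / len(logs))

# Sample new (always-positive) points in log space, then invert the transform.
samples = [math.exp(random.gauss(mu, sigma)) for _ in range(5)]
print(round(mu, 2), round(sigma, 2))  # empirical log-space mean and std
```

The same recipe generalizes to any invertible "Gaussianizing" transform: fit simple Gaussian statistics in the transformed space, sample there, and map samples back.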