Few-Shot Image Classification

202 papers with code • 88 benchmarks • 23 datasets

Few-Shot Image Classification is a computer vision task in which a model must classify images into predefined categories using only a few labeled examples of each category (typically fewer than six). The goal is to recognize and classify new images with minimal supervision and limited data, without having to train on large labeled datasets.
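To make the setting concrete, the sketch below runs a single N-way K-shot episode with a nearest-prototype classifier (in the spirit of Prototypical Networks). The embedding network `embed` is a placeholder assumption, not tied to any specific paper:

```python
import torch

def classify_episode(embed, support_x, support_y, query_x, n_way):
    """Nearest-prototype classification for one N-way K-shot episode.

    support_x: [n_way * k_shot, C, H, W] the few labeled examples
    support_y: [n_way * k_shot] labels in [0, n_way)
    query_x:   [n_query, C, H, W] images to classify
    """
    with torch.no_grad():
        s = embed(support_x)   # [N*K, D] support embeddings
        q = embed(query_x)     # [Q, D]   query embeddings
    # Class prototype = mean embedding of that class's support examples.
    protos = torch.stack([s[support_y == c].mean(0) for c in range(n_way)])
    # Predict the class whose prototype is nearest in Euclidean distance.
    return torch.cdist(q, protos).argmin(dim=1)
```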

(Image credit: Learning Embedding Adaptation for Few-Shot Learning)

Latest papers with no code

Feature Activation Map: Visual Explanation of Deep Learning Models for Image Classification

no code yet • 11 Jul 2023

However, all the CAM-based methods (e.g., CAM, Grad-CAM, and Relevance-CAM) can only be used to interpret CNN models that use fully-connected (FC) layers as the classifier.
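As a reminder of why the FC classifier matters, here is a minimal sketch of the original CAM computation: the FC weights for the target class re-weight the final conv feature maps, so an architecture without such an FC head has no weights for CAM to use. This is an illustrative sketch, not this paper's method:

```python
import torch
import torch.nn.functional as F

def class_activation_map(feature_maps, fc_weight, class_idx):
    """Original CAM: weight the final conv feature maps by the FC
    classifier weights of the target class.

    feature_maps: [C, H, W] activations of the final conv layer
    fc_weight:    [num_classes, C] weights of the FC classifier that
                  follows global average pooling (this FC layer is why
                  CAM cannot be applied to FC-free architectures)
    """
    w = fc_weight[class_idx]                         # [C]
    cam = torch.einsum('c,chw->hw', w, feature_maps) # weighted sum of maps
    cam = F.relu(cam)                                # keep positive evidence
    return cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
```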

FILM: How can Few-Shot Image Classification Benefit from Pre-Trained Language Models?

no code yet • 9 Jul 2023

Few-shot learning aims to train models that generalize to novel classes from only a few samples.

Distilling Self-Supervised Vision Transformers for Weakly-Supervised Few-Shot Classification & Segmentation

no code yet • CVPR 2023

For this mixed setup, we propose to improve the pseudo-labels with a pseudo-label enhancer trained on the available ground-truth pixel-level labels.

SuSana Distancia is all you need: Enforcing class separability in metric learning via two novel distance-based loss functions for few-shot image classification

no code yet • 15 May 2023

Few-shot learning is a challenging area of research that aims to learn new concepts from only a few labeled samples.

Strong Baselines for Parameter Efficient Few-Shot Fine-tuning

no code yet • 4 Apr 2023

Through our controlled empirical study, we have two main findings: (i) fine-tuning just the LayerNorm parameters (which we call LN-Tune) during few-shot adaptation is an extremely strong baseline across ViTs pre-trained with both self-supervised and supervised objectives; (ii) for self-supervised ViTs, simply learning a set of scaling parameters for each attention matrix (which we call AttnScale), together with a domain-residual adapter (DRA) module, leads to state-of-the-art performance on Meta-Dataset (MD) while being roughly 9× more parameter-efficient.
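A minimal sketch of what the LN-Tune baseline amounts to in PyTorch, as described above (freeze everything, then unfreeze only the LayerNorm affine parameters); an illustration of the idea, not the authors' code:

```python
import torch.nn as nn

def ln_tune(vit: nn.Module):
    """Freeze all ViT weights except LayerNorm affine parameters,
    following the LN-Tune baseline described above (sketch only)."""
    for p in vit.parameters():
        p.requires_grad = False
    for m in vit.modules():
        if isinstance(m, nn.LayerNorm):
            for p in m.parameters():   # scale (weight) and bias
                p.requires_grad = True
    # Only LayerNorm params receive gradients during few-shot adaptation.
    return [p for p in vit.parameters() if p.requires_grad]
```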

Boosting Few-Shot Text Classification via Distribution Estimation

no code yet • 26 Mar 2023

Distribution estimation has been demonstrated to be one of the most effective approaches to few-shot image classification, as low-level patterns and underlying representations transfer easily across different tasks in the computer vision domain.
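The general recipe behind distribution-estimation approaches is to fit a simple per-class distribution to the few support features, draw synthetic features from it, and train a lightweight classifier on the enlarged set. A minimal diagonal-Gaussian version (a hedged sketch, not any specific paper's method):

```python
import torch

def augment_class_features(feats, n_sample=100, eps=1e-4):
    """Fit a diagonal Gaussian to one class's few support features and
    draw synthetic features from it (illustrative sketch only).

    feats: [k_shot, D] embeddings of one class's support examples
    """
    mu = feats.mean(0)
    std = feats.std(0, unbiased=False) + eps   # avoid degenerate variance
    synth = mu + std * torch.randn(n_sample, mu.numel())
    # Train a lightweight classifier on real + sampled features.
    return torch.cat([feats, synth], dim=0)
```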

RotoGBML: Towards Out-of-Distribution Generalization for Gradient-Based Meta-Learning

no code yet • 12 Mar 2023

Out-of-distribution (OOD) data exacerbates inconsistencies in the magnitudes and directions of task gradients, making it challenging for gradient-based meta-learning (GBML) to optimize the meta-knowledge by minimizing the sum of task gradients in each minibatch.
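For context, the sketch below shows what "minimizing the sum of task gradients in each minibatch" looks like in a first-order, MAML-style meta-learner; all names are illustrative placeholders, and this is not RotoGBML itself:

```python
import copy
import torch

def meta_step(model, tasks, loss_fn, meta_opt, inner_lr=0.01):
    """One GBML outer step (first-order MAML-style sketch): adapt a copy
    of the model to each task, then accumulate the per-task gradients on
    the meta-parameters. Under OOD shift these task gradients can conflict
    in magnitude and direction, which is the problem noted above."""
    meta_opt.zero_grad()
    for support_x, support_y, query_x, query_y in tasks:
        fast = copy.deepcopy(model)
        # Inner loop: one adaptation step on the task's support set.
        grads = torch.autograd.grad(loss_fn(fast(support_x), support_y),
                                    fast.parameters())
        with torch.no_grad():
            for p, g in zip(fast.parameters(), grads):
                p -= inner_lr * g
        # Outer loss on the query set; its gradient w.r.t. the adapted
        # weights serves as a first-order meta-gradient for this task.
        task_grads = torch.autograd.grad(loss_fn(fast(query_x), query_y),
                                         fast.parameters())
        for p, g in zip(model.parameters(), task_grads):
            p.grad = g if p.grad is None else p.grad + g  # sum over tasks
    meta_opt.step()   # apply the summed task gradients
```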

Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning

no code yet • CVPR 2023

Hence, we advocate that the key to better performance lies in meaningful latent modality structures rather than perfect modality alignment.

CovidExpert: A Triplet Siamese Neural Network framework for the detection of COVID-19

no code yet • 17 Feb 2023

Patients with COVID-19 infection may have pneumonia-like symptoms as well as respiratory problems that may harm the lungs.

Explore the Power of Dropout on Few-shot Learning

no code yet • 26 Jan 2023

The generalization power of the pre-trained model is key to few-shot deep learning.