Few-Shot Image Classification

202 papers with code • 88 benchmarks • 23 datasets

Few-Shot Image Classification is a computer vision task that involves training machine learning models to classify images into predefined categories using only a few labeled examples of each category (typically fewer than six per class). The goal is to enable models to recognize and classify new images with minimal supervision and limited data, without having to train on large datasets.

(Image credit: Learning Embedding Adaptation for Few-Shot Learning)
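Few-shot classifiers are usually evaluated in episodes: each episode samples N classes and K labeled support images per class, plus query images to classify. The sketch below shows one way to build such an episode; the dataset layout and the toy data are illustrative assumptions, not tied to any specific benchmark.

```python
# Minimal sketch of sampling an N-way K-shot episode.
# `images_by_class` is an assumed {class_name: [images]} mapping.
import random
from collections import defaultdict

def sample_episode(images_by_class, n_way=5, k_shot=5, n_query=15):
    """Pick n_way classes, then k_shot support and n_query query images per class."""
    classes = random.sample(list(images_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        imgs = random.sample(images_by_class[cls], k_shot + n_query)
        support += [(img, label) for img in imgs[:k_shot]]
        query += [(img, label) for img in imgs[k_shot:]]
    return support, query

# Toy usage with placeholder "images" (strings stand in for real tensors).
toy = defaultdict(list)
for c in range(20):
    toy[f"class_{c}"] = [f"img_{c}_{i}" for i in range(30)]
support, query = sample_episode(toy)
print(len(support), len(query))  # 25 support, 75 query for a 5-way 5-shot episode
```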

Most implemented papers

A Simple Neural Attentive Meta-Learner

eambutu/snail-pytorch ICLR 2018

Deep neural networks excel in regimes with large amounts of data, but tend to struggle when data is scarce or when they need to adapt quickly to changes in the task.

Dynamic Few-Shot Visual Learning without Forgetting

gidariss/FewShotWithoutForgetting CVPR 2018

In this context, the goal of our work is to devise a few-shot visual learning system that, during test time, can efficiently learn novel categories from only a few training examples while not forgetting the initial categories on which it was trained (here called base categories).
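One common way to learn novel classes without overwriting base classes is a cosine-similarity classifier whose novel-class weights are imprinted from the few support embeddings while the base weights stay fixed. The following is a rough, hedged sketch of that idea, not the authors' implementation; layer sizes and the scale constant are assumptions.

```python
# Illustrative cosine classifier with weight imprinting for novel classes.
import torch
import torch.nn.functional as F

class CosineClassifier(torch.nn.Module):
    def __init__(self, feat_dim, n_base, scale=10.0):
        super().__init__()
        self.base_weights = torch.nn.Parameter(torch.randn(n_base, feat_dim))
        self.scale = scale
        self.novel_weights = None  # filled at test time from support examples

    def imprint_novel(self, support_feats, support_labels, n_novel):
        # Average the normalized support embeddings per novel class.
        self.novel_weights = torch.stack([
            F.normalize(support_feats[support_labels == c].mean(0), dim=0)
            for c in range(n_novel)
        ])

    def forward(self, feats):
        w = F.normalize(self.base_weights, dim=1)
        if self.novel_weights is not None:
            w = torch.cat([w, self.novel_weights], dim=0)  # base + novel classes
        return self.scale * F.normalize(feats, dim=1) @ w.t()
```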

Edge-labeling Graph Neural Network for Few-shot Learning

khy0809/fewshot-egnn CVPR 2019

In this paper, we propose a novel edge-labeling graph neural network (EGNN), which adapts a deep neural network on the edge-labeling graph, for few-shot learning.
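To make the edge-labeling idea concrete, here is a very loose sketch of one graph layer in which edge "labels" (pairwise similarity scores) are predicted from node features and then used to aggregate neighbor information. The MLP, shapes, and update rule are illustrative assumptions, not the EGNN architecture itself.

```python
# Loose sketch of an edge-labeling graph layer.
import torch
import torch.nn.functional as F

class EdgeLabelLayer(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, 1))
        self.node_proj = torch.nn.Linear(dim, dim)

    def forward(self, nodes):                                     # nodes: (N, dim)
        diff = (nodes[:, None, :] - nodes[None, :, :]).abs()      # pairwise |x_i - x_j|
        edges = torch.sigmoid(self.edge_mlp(diff)).squeeze(-1)    # (N, N) edge labels
        edges = edges / edges.sum(dim=1, keepdim=True)            # row-normalize
        new_nodes = F.relu(self.node_proj(edges @ nodes))         # aggregate neighbors
        return new_nodes, edges
```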

Harnessing the Power of Infinitely Wide Deep Nets on Small-data Tasks

LeoYu/neural-tangent-kernel-UCI ICLR 2020

On the VOC07 testbed for few-shot image classification tasks on ImageNet with transfer learning (Goyal et al., 2019), replacing the linear SVM currently used with a Convolutional NTK SVM consistently improves performance.
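Mechanically, swapping a linear SVM for a kernel (e.g., NTK) SVM amounts to fitting the classifier on a precomputed Gram matrix. The snippet below shows that plumbing with scikit-learn; `ntk_kernel` here is a plain dot-product placeholder, not a real convolutional NTK (which would typically be computed with a dedicated library such as neural-tangents).

```python
# Sketch of an SVM on a precomputed kernel, in place of a linear SVM.
import numpy as np
from sklearn.svm import SVC

def ntk_kernel(X, Y):
    # Placeholder kernel; a real CNTK Gram matrix would go here.
    return X @ Y.T

X_train, y_train = np.random.randn(40, 512), np.random.randint(0, 5, 40)
X_test = np.random.randn(10, 512)

clf = SVC(kernel="precomputed", C=1.0)
clf.fit(ntk_kernel(X_train, X_train), y_train)   # train-vs-train Gram matrix
preds = clf.predict(ntk_kernel(X_test, X_train)) # test-vs-train Gram matrix
```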

PanGu-$\alpha$: Large-scale Autoregressive Pretrained Chinese Language Models with Auto-parallel Computation

mindspore-ai/models 26 Apr 2021

To enhance the generalization ability of PanGu-$\alpha$, we collect 1.1TB of high-quality Chinese data from a wide range of domains to pretrain the model.

Cross-domain Few-shot Learning with Task-specific Adapters

google-research/meta-dataset CVPR 2022

In this paper, we look at the problem of cross-domain few-shot classification that aims to learn a classifier from previously unseen classes and domains with few labeled samples.
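A typical way to specialize a pre-trained backbone to an unseen domain with few labels is to attach small task-specific adapters and fit only those on the support set. The sketch below shows a residual 1x1-conv adapter on a frozen block; module names, shapes, and the zero initialization are assumptions for illustration, not the meta-dataset code.

```python
# Illustrative task-specific residual adapter on a frozen backbone block.
import torch

class ResidualAdapter(torch.nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1 conv initialized to zero so the adapter starts as the identity.
        self.conv = torch.nn.Conv2d(channels, channels, kernel_size=1)
        torch.nn.init.zeros_(self.conv.weight)
        torch.nn.init.zeros_(self.conv.bias)

    def forward(self, x):
        return x + self.conv(x)

class AdaptedBlock(torch.nn.Module):
    """Frozen backbone block plus a trainable adapter fit on the support set."""
    def __init__(self, block, channels):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False
        self.adapter = ResidualAdapter(channels)

    def forward(self, x):
        return self.adapter(self.block(x))
```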

Self-Supervision Can Be a Good Few-Shot Learner

bbbdylan/unisiam 19 Jul 2022

Specifically, we maximize the mutual information (MI) of instances and their representations with a low-bias MI estimator to perform self-supervised pre-training.
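For orientation, a widely used mutual-information lower-bound estimator in self-supervised pre-training is the contrastive InfoNCE objective over two augmented views of each image. The sketch below shows that generic setup; it is not the paper's specific low-bias estimator, and the temperature and embedding sizes are arbitrary assumptions.

```python
# Generic InfoNCE sketch (a common MI lower-bound estimator).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two augmented views of the same images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (B, B) view-to-view similarities
    targets = torch.arange(z1.size(0))      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```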

Improving ProtoNet for Few-Shot Video Object Recognition: Winner of ORBIT Challenge 2022

guliisgreat/orbit-2022-winner-method 1 Oct 2022

In this work, we present the winning solution for ORBIT Few-Shot Video Object Recognition Challenge 2022.

TADAM: Task dependent adaptive metric for improved few-shot learning

yaoyao-liu/mini-imagenet-tools NeurIPS 2018

We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space.
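One simple form of conditioning on the task sample set is to summarize the support embeddings into a task vector and use it to produce per-channel scale and shift parameters (FiLM-style) applied to all features before computing prototypes and distances. The sketch below illustrates that pattern; the conditioning MLP and shapes are assumptions, not the TADAM architecture.

```python
# Sketch of task-conditioned feature modulation from a support-set summary.
import torch

class TaskConditionedFeatures(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.to_gamma = torch.nn.Linear(dim, dim)
        self.to_beta = torch.nn.Linear(dim, dim)

    def forward(self, feats, support_feats):
        task_embed = support_feats.mean(0)          # summary of the current task
        gamma = 1.0 + self.to_gamma(task_embed)     # per-channel scale
        beta = self.to_beta(task_embed)             # per-channel shift
        return gamma * feats + beta

cond = TaskConditionedFeatures(64)
support = torch.randn(25, 64)                       # e.g. 5-way 5-shot support features
query = cond(torch.randn(75, 64), support)          # task-adapted query features
```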

TextCaps: Handwritten Character Recognition with Very Small Datasets

vinojjayasundara/textcaps 17 Apr 2019

Our system is useful in character recognition for localized languages that lack much labeled training data, and even in related, more general contexts such as object recognition.