Few-Shot Learning

1036 papers with code • 22 benchmarks • 41 datasets

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation for various tasks and train task-specific classifiers on top of this representation.
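The "shared representation plus task-specific classifier" idea above can be illustrated with a nearest-prototype classifier (in the style of prototypical networks, though this is only a hedged toy sketch, not any specific paper's implementation). The arrays below stand in for embeddings produced by a shared encoder; the per-class mean prototypes play the role of the lightweight task-specific classifier:

```python
import numpy as np

def prototypes(support_x, support_y):
    # One mean embedding per class: the cheap "task-specific classifier"
    # built on top of a shared (frozen) representation.
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_x, classes, protos):
    # Assign each query to the class of its nearest prototype (Euclidean).
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

# Toy 2-way 3-shot episode with hand-made 2-D "embeddings".
sx = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # class 0
               [1.0, 1.0], [0.9, 1.0], [1.0, 0.9]])  # class 1
sy = np.array([0, 0, 0, 1, 1, 1])
cls, protos = prototypes(sx, sy)
preds = classify(np.array([[0.05, 0.05], [0.95, 0.95]]), cls, protos)
print(preds)  # → [0 1]
```

Because only the prototypes depend on the task, a new task needs just the few support examples to "train" its classifier, which is the core appeal of this family of methods.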

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization


Latest papers with no code

When LLMs are Unfit Use FastFit: Fast and Effective Text Classification with Many Classes

no code yet • 18 Apr 2024

We present FastFit, a method and Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes.

Stance Detection on Social Media with Fine-Tuned Large Language Models

no code yet • 18 Apr 2024

This study emphasizes the potential of LLMs in stance detection and calls for more extensive research in this field.

Many-Shot In-Context Learning

no code yet • 17 Apr 2024

Finally, we demonstrate that, unlike few-shot learning, many-shot learning is effective at overriding pretraining biases and can learn high-dimensional functions with numerical inputs.

Improving Recall of Large Language Models: A Model Collaboration Approach for Relational Triple Extraction

no code yet • 15 Apr 2024

The framework includes an evaluation model that can extract related entity pairs with high precision.

CryoMAE: Few-Shot Cryo-EM Particle Picking with Masked Autoencoders

no code yet • 15 Apr 2024

Cryo-electron microscopy (cryo-EM) emerges as a pivotal technology for determining the architecture of cells, viruses, and protein assemblies at near-atomic resolution.

GeMQuAD : Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning

no code yet • 14 Apr 2024

The emergence of Large Language Models (LLMs) with capabilities like In-Context Learning (ICL) has ushered in new possibilities for data generation across various domains while minimizing the need for extensive data collection and modeling techniques.
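In-context learning of the kind the abstract describes typically works by prepending a handful of worked examples to the model's prompt. A minimal, library-agnostic sketch of assembling such a few-shot prompt (the field names and instruction text here are illustrative assumptions, not GeMQuAD's actual format):

```python
def build_icl_prompt(exemplars, query_context, instruction):
    # Few-shot prompt: an instruction, k worked examples, then the new
    # context for which the LLM should generate a question-answer pair.
    parts = [instruction]
    for context, question, answer in exemplars:
        parts.append(f"Context: {context}\nQ: {question}\nA: {answer}")
    parts.append(f"Context: {query_context}\nQ:")
    return "\n\n".join(parts)

prompt = build_icl_prompt(
    exemplars=[("Paris is the capital of France.",
                "What is the capital of France?",
                "Paris")],
    query_context="Berlin is the capital of Germany.",
    instruction="Generate a question and answer for each context.",
)
print(prompt)
```

The returned string would then be sent to an LLM, which continues the pattern by producing a question-answer pair for the final context; this is how few-shot prompting can generate synthetic training data without any parameter updates.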

PM2: A New Prompting Multi-modal Model Paradigm for Few-shot Medical Image Classification

no code yet • 13 Apr 2024

The other is to classify based on the feature distribution of visual tokens from the vision encoder.

ChatGPT and general-purpose AI count fruits in pictures surprisingly well

no code yet • 12 Apr 2024

We interpret these results as two surprises for deep learning users in applied domains: a foundation model with few-shot domain-specific learning can drastically save time and effort compared to the conventional approach, and ChatGPT can achieve relatively good performance.

Sketch-Plan-Generalize: Continual Few-Shot Learning of Inductively Generalizable Spatial Concepts for Language-Guided Robot Manipulation

no code yet • 11 Apr 2024

Our goal is to build embodied agents that can learn inductively generalizable spatial concepts in a continual manner, e.g., constructing a tower of a given height.

Using Few-Shot Learning to Classify Primary Lung Cancer and Other Malignancy with Lung Metastasis in Cytological Imaging via Endobronchial Ultrasound Procedures

no code yet • 9 Apr 2024

Batch Spectral Regularization (BSR) will be incorporated as an additional loss term, and the fine-tuning method of PMF will be modified.
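Batch Spectral Regularization, as used in cross-domain few-shot work, penalizes the singular values of the per-batch feature matrix to discourage the network from relying on a few dominant spectral directions. A minimal numpy sketch of such a penalty (an illustrative assumption about how the term is computed, not this paper's implementation; in training it would be scaled by a coefficient and added to the classification loss):

```python
import numpy as np

def batch_spectral_penalty(features):
    # features: (batch_size, feature_dim) matrix of embeddings.
    # Penalty = sum of squared singular values of the batch feature matrix.
    # Note: the sum of squared singular values equals the squared
    # Frobenius norm of the matrix, so this shrinks feature magnitude
    # across all spectral components.
    s = np.linalg.svd(features, compute_uv=False)
    return float(np.sum(s ** 2))

# Diagonal toy batch: singular values are 4 and 3, so the penalty is 25.
penalty = batch_spectral_penalty(np.array([[3.0, 0.0], [0.0, 4.0]]))
print(penalty)  # → 25.0
```

Restricting the sum to only the top-k singular values is a common variant that targets just the dominant components; which variant a given paper uses should be checked against its text.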