Few-Shot Learning

1013 papers with code • 22 benchmarks • 41 datasets

Few-Shot Learning is an example of meta-learning: a learner is trained on several related tasks during the meta-training phase so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the Few-Shot Learning problem is to learn a common representation across the various tasks and to train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
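The "common representation plus task-specific classifier" idea can be sketched as a nearest-prototype classifier over a shared embedding, in the spirit of prototypical networks. This is a minimal illustration, not the method from the cited paper: the `embed` function below stands in for a representation learned during meta-training (here just a fixed random linear map), and the support/query data are toy values.

```python
import math
import random

def embed(x):
    """Stand-in for a shared representation learned during meta-training:
    a fixed, seeded random linear map (hypothetical, for illustration only)."""
    rng = random.Random(0)
    W = [[rng.gauss(0, 1) for _ in range(4)] for _ in range(len(x))]
    # Project the input onto 4 random directions.
    return [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*W)]

def fit_prototypes(support):
    """Task-specific 'classifier': one prototype (mean embedding) per class,
    computed from the few labeled support examples of a new task."""
    groups = {}
    for x, y in support:
        groups.setdefault(y, []).append(embed(x))
    return {y: [sum(d) / len(zs) for d in zip(*zs)] for y, zs in groups.items()}

def predict(protos, x):
    """Label a query by its nearest class prototype (Euclidean distance)."""
    z = embed(x)
    return min(protos, key=lambda y: math.dist(z, protos[y]))

# A 2-way 2-shot toy task: two well-separated clusters in input space.
support = [([0.0] * 8, "cat"), ([0.1] * 8, "cat"),
           ([5.0] * 8, "dog"), ([5.1] * 8, "dog")]
protos = fit_prototypes(support)
print(predict(protos, [0.05] * 8), predict(protos, [4.9] * 8))  # cat dog
```

Only the prototypes are fit per task; the embedding is shared, which is what lets the classifier work from a handful of examples.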

Latest papers with no code

Investigating grammatical abstraction in language models using few-shot learning of novel noun gender

no code yet • 15 Mar 2024

Language models were tasked with learning the gender of a novel noun embedding from a few examples in one grammatical agreement context and predicting agreement in another, unseen context.

Search-based Optimisation of LLM Learning Shots for Story Point Estimation

no code yet • 13 Mar 2024

One of the ways Large Language Models (LLMs) are used to perform machine learning tasks is to provide them with a few examples before asking them to produce a prediction.
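This "few examples before the prediction" pattern is few-shot prompting. A minimal sketch of how such a prompt might be assembled for story point estimation follows; the prompt format and example stories are hypothetical, and the actual LLM call is omitted.

```python
def build_few_shot_prompt(examples, query, instruction):
    """Assemble a few-shot prompt: an instruction, a handful of worked
    (story, points) examples, then the query story left for the model
    to complete. Format is illustrative, not from the paper."""
    shots = "\n\n".join(f"Story: {s}\nStory points: {p}" for s, p in examples)
    return f"{instruction}\n\n{shots}\n\nStory: {query}\nStory points:"

# Hypothetical labeled examples ("shots") and a new story to estimate.
examples = [
    ("Add a logout button to the navbar", 2),
    ("Migrate the user service to a new database schema", 8),
]
prompt = build_few_shot_prompt(
    examples,
    "Add input validation to the signup form",
    "Estimate the story points for each user story.",
)
print(prompt)
```

Search-based optimisation, as in the paper above, would then select which examples to include so as to maximise estimation accuracy.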

Segmentation of Knee Bones for Osteoarthritis Assessment: A Comparative Analysis of Supervised, Few-Shot, and Zero-Shot Learning Approaches

no code yet • 13 Mar 2024

These findings highlight the effectiveness of few-shot learning for semantic segmentation and the potential of zero-shot learning in enhancing classification models for knee osteoarthritis diagnosis.

Multi-Objective Optimization Using Adaptive Distributed Reinforcement Learning

no code yet • 13 Mar 2024

We test our algorithm in an ITS environment with edge cloud computing.

Rethinking ASTE: A Minimalist Tagging Scheme Alongside Contrastive Learning

no code yet • 12 Mar 2024

Aspect Sentiment Triplet Extraction (ASTE) is a burgeoning subtask of fine-grained sentiment analysis, aiming to extract structured sentiment triplets from unstructured textual data.

MENTOR: Multilingual tExt detectioN TOward leaRning by analogy

no code yet • 12 Mar 2024

Text detection is frequently used in vision-based mobile robots when they need to interpret texts in their surroundings to perform a given task.

Boosting keyword spotting through on-device learnable user speech characteristics

no code yet • 12 Mar 2024

Keyword spotting systems for always-on TinyML-constrained applications require on-site tuning to boost the accuracy of offline trained classifiers when deployed in unseen inference conditions.

Evaluating the Energy Efficiency of Few-Shot Learning for Object Detection in Industrial Settings

no code yet • 11 Mar 2024

In the ever-evolving era of Artificial Intelligence (AI), model performance has constituted a key metric driving innovation, leading to an exponential growth in model size and complexity.

ClinicalMamba: A Generative Clinical Language Model on Longitudinal Clinical Notes

no code yet • 9 Mar 2024

The advancement of natural language processing (NLP) systems in healthcare hinges on the ability of language models to interpret the intricate information contained within clinical notes.

DEEP-ICL: Definition-Enriched Experts for Language Model In-Context Learning

no code yet • 7 Mar 2024

It has long been assumed that the sheer number of parameters in large language models (LLMs) drives in-context learning (ICL) capabilities, enabling remarkable performance improvements by leveraging task-specific demonstrations.