Few-Shot Learning

1043 papers with code • 22 benchmarks • 41 datasets

Few-Shot Learning is an example of meta-learning, where a learner is trained on several related tasks during the meta-training phase, so that it can generalize well to unseen (but related) tasks with just a few examples during the meta-testing phase. An effective approach to the few-shot learning problem is to learn a common representation for various tasks and train task-specific classifiers on top of this representation.

Source: Penalty Method for Inversion-Free Deep Bilevel Optimization
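
The "common representation plus task-specific classifier" recipe can be made concrete with a minimal, self-contained sketch. Everything here is illustrative: the random-projection ENCODER stands in for a meta-trained backbone, and the per-task classifier is a nearest-prototype rule in the spirit of Prototypical Networks.

```python
# Minimal sketch of the shared-representation recipe described above.
import numpy as np

rng = np.random.default_rng(0)
ENCODER = rng.normal(size=(64, 16))  # stand-in for a meta-trained shared encoder

def embed(x):
    """Map raw inputs into the common representation space."""
    return x @ ENCODER

def fit_prototypes(support_x, support_y):
    """Task-specific 'classifier': one mean embedding (prototype) per class."""
    z = embed(support_x)
    return {c: z[support_y == c].mean(axis=0) for c in np.unique(support_y)}

def predict(prototypes, query_x):
    """Assign each query point to its nearest class prototype."""
    z = embed(query_x)
    classes = sorted(prototypes)
    dists = np.stack([np.linalg.norm(z - prototypes[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Toy 2-way 5-shot episode: 5 labelled examples per class, then 4 queries.
support_x = rng.normal(size=(10, 64))
support_y = np.repeat([0, 1], 5)
query_x = rng.normal(size=(4, 64))
print(predict(fit_prototypes(support_x, support_y), query_x))
```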

Latest papers with no code

Empowering Large Language Models for Textual Data Augmentation

no code yet • 26 Apr 2024

With their ability to understand and execute natural language instructions, large language models (LLMs) can potentially serve as a powerful tool for textual data augmentation.
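
As a hedged illustration of the idea (not the paper's method), an augmentation loop might wrap each labelled example in a rewrite instruction and collect the model's paraphrases; call_llm below is a placeholder for whatever LLM client is available.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: plug in any chat-completion client here."""
    raise NotImplementedError

def augment(text: str, label: str, n: int = 3) -> list[str]:
    """Ask the LLM for n label-preserving rewrites of one training example."""
    prompt = (
        f"Rewrite the following '{label}' example in a different style, "
        f"preserving its meaning and its label:\n\n{text}"
    )
    return [call_llm(prompt) for _ in range(n)]

# Each original (text, label) pair then contributes n synthetic pairs:
# train_set += [(aug, label) for aug in augment(text, label)]
```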

Meta-Transfer Derm-Diagnosis: Exploring Few-Shot Learning and Transfer Learning for Skin Disease Classification in Long-Tail Distribution

no code yet • 25 Apr 2024

Moreover, our experiments, ranging from 2-way to 5-way classification with up to 10 examples, showed a growing success rate for traditional transfer learning methods as the number of examples increased.
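
For readers unfamiliar with the episodic protocol behind phrases like "2-way to 5-way with up to 10 examples", here is an illustrative N-way K-shot episode sampler (not the paper's code); dataset is assumed to map each class label to a list of examples.

```python
import random

def sample_episode(dataset, n_way=5, k_shot=10, n_query=5, seed=None):
    """Draw one N-way K-shot task: a labelled support set plus a query set."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label in classes:
        items = rng.sample(dataset[label], k_shot + n_query)
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query

# Toy dataset: 6 classes with 20 items each; sample a 2-way 10-shot episode.
toy = {c: [f"img_{c}_{i}" for i in range(20)] for c in "ABCDEF"}
support, query = sample_episode(toy, n_way=2, k_shot=10, seed=0)
print(len(support), len(query))  # -> 20 10
```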

A comprehensive and easy-to-use multi-domain multi-task medical imaging meta-dataset (MedIMeta)

no code yet • 24 Apr 2024

While the field of medical image analysis has undergone a transformative shift with the integration of machine learning techniques, the main challenge of these techniques is often the scarcity of large, diverse, and well-annotated datasets.

Beyond Deepfake Images: Detecting AI-Generated Videos

no code yet • 24 Apr 2024

Recent advances in generative AI have led to the development of techniques to generate visually realistic synthetic video.

Graph Machine Learning in the Era of Large Language Models (LLMs)

no code yet • 23 Apr 2024

Meanwhile, graphs, especially knowledge graphs, are rich in reliable factual knowledge, which can be utilized to enhance the reasoning capabilities of LLMs and potentially alleviate their limitations such as hallucinations and the lack of explainability.

Identifying Fairness Issues in Automatically Generated Testing Content

no code yet • 23 Apr 2024

Natural language generation tools are powerful and effective for generating content.

Text-dependent Speaker Verification (TdSV) Challenge 2024: Challenge Evaluation Plan

no code yet • 20 Apr 2024

This document outlines the Text-dependent Speaker Verification (TdSV) Challenge 2024, which centers on analyzing and exploring novel approaches for text-dependent speaker verification.

When LLMs are Unfit Use FastFit: Fast and Effective Text Classification with Many Classes

no code yet • 18 Apr 2024

We present FastFit, a method and a Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes.
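
FastFit's own API is not reproduced here; as a generic point of reference (and explicitly not FastFit's method), a many-class few-shot text baseline can be as simple as TF-IDF features with a nearest-centroid classifier.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid

# Two intents with two labelled examples each -- a scaled-down version of the
# few-shot, many-class regime where classes can be semantically close.
train_texts = ["transfer money abroad", "send funds overseas",
               "my card is not working", "my card was declined"]
train_labels = ["intl_transfer", "intl_transfer", "card_issue", "card_issue"]

vec = TfidfVectorizer().fit(train_texts)
clf = NearestCentroid().fit(vec.transform(train_texts), train_labels)
print(clf.predict(vec.transform(["why was my card rejected?"])))  # card_issue
```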

Stance Detection on Social Media with Fine-Tuned Large Language Models

no code yet • 18 Apr 2024

This study emphasizes the potential of LLMs in stance detection and calls for more extensive research in this field.

Many-Shot In-Context Learning

no code yet • 17 Apr 2024

Finally, we demonstrate that, unlike few-shot learning, many-shot learning is effective at overriding pretraining biases and can learn high-dimensional functions with numerical inputs.
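
Mechanically, the only difference from few-shot prompting is scale: a many-shot prompt concatenates hundreds of demonstrations ahead of the query instead of a handful. A minimal sketch, with all names illustrative:

```python
def build_many_shot_prompt(demos, query, instruction="Solve the task."):
    """Concatenate (input, output) demonstrations ahead of the test query."""
    lines = [instruction]
    for x, y in demos:  # demos may hold hundreds of pairs rather than 3-5
        lines.append(f"Input: {x}\nOutput: {y}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

# 200-shot prompt for a toy squaring task; the model call itself is omitted.
demos = [(str(n), str(n * n)) for n in range(1, 201)]
prompt = build_many_shot_prompt(demos, "14")
print(prompt.count("Input:"), len(prompt))  # 201 input blocks in the prompt
```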