Zero-Shot Learning

557 papers with code • 18 benchmarks • 29 datasets

Zero-shot learning (ZSL) is a model's ability to recognize classes it has never seen during training: no labeled examples of the unseen classes are available during supervised learning, so the model must rely on auxiliary information such as attributes or textual class descriptions.
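
For example, contrastive vision-language models such as CLIP can classify images into classes named only at test time by comparing an image embedding against embeddings of candidate class names. A minimal, illustrative sketch using the Hugging Face transformers CLIP API (the checkpoint, image path, and label set are assumptions, not part of any specific benchmark):

```python
# Minimal zero-shot image classification sketch with CLIP
# (illustrative; checkpoint, image, and candidate labels are placeholders).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
candidate_labels = ["zebra", "okapi", "tapir"]  # classes never seen in training

# Encode the image and one text prompt per candidate class.
inputs = processor(
    text=[f"a photo of a {label}" for label in candidate_labels],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns
# them into a distribution over the candidate classes.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(candidate_labels, probs[0].tolist())))
```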

Earlier work in zero-shot learning uses attributes in a two-step approach to infer unknown classes. In the computer vision context, more recent advances learn mappings from image feature space to semantic space (a minimal sketch of this idea follows below). Other approaches learn non-linear multimodal embeddings. In the modern NLP context, language models can be evaluated on downstream tasks without fine-tuning.
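
As a concrete illustration of the mapping-based approach, one can learn a projection from image features into a class-attribute space and, at test time, assign an unseen class by nearest attribute vector. The following PyTorch sketch uses random placeholder data and dimensions; it illustrates the general idea, not any specific published method:

```python
# Sketch of embedding-based ZSL: project image features into a semantic
# (attribute) space and classify by cosine similarity to class attribute
# vectors. All dimensions and data here are illustrative placeholders.
import torch
import torch.nn.functional as F

feat_dim, attr_dim = 2048, 85          # e.g., CNN features, AwA-style attributes
W = torch.randn(feat_dim, attr_dim, requires_grad=True)
opt = torch.optim.Adam([W], lr=1e-3)

# Hypothetical training data: image features and their classes' attribute vectors.
train_feats = torch.randn(512, feat_dim)
train_attrs = F.normalize(torch.randn(512, attr_dim), dim=-1)

for _ in range(100):  # toy training loop
    proj = F.normalize(train_feats @ W, dim=-1)
    loss = (1 - (proj * train_attrs).sum(-1)).mean()  # maximize cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()

# Zero-shot inference: unseen classes are described only by attribute vectors.
unseen_class_attrs = F.normalize(torch.randn(10, attr_dim), dim=-1)  # 10 unseen classes
test_feats = torch.randn(4, feat_dim)
scores = F.normalize(test_feats @ W, dim=-1) @ unseen_class_attrs.T
pred = scores.argmax(dim=-1)  # index of the most compatible unseen class
```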

Benchmark datasets for zero-shot learning include aPY, AwA, and CUB, among others.

(Image credit: Prototypical Networks for Few shot Learning in PyTorch)

Latest papers with no code

OTTER: Improving Zero-Shot Classification via Optimal Transport

no code yet • 12 Apr 2024

Popular zero-shot models suffer due to artifacts inherited from pretraining.

Connecting NeRFs, Images, and Text

no code yet • 11 Apr 2024

Neural Radiance Fields (NeRFs) have emerged as a standard framework for representing 3D scenes and objects, introducing a novel data type for information exchange and storage.

Progressive Semantic-Guided Vision Transformer for Zero-Shot Learning

no code yet • 11 Apr 2024

ZSLViT mainly considers two properties throughout the network: i) explicitly discovering semantically related visual representations, and ii) discarding semantically unrelated visual information.

Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero-shot Medical Image Segmentation

no code yet • 9 Apr 2024

Finally, SAM is prompted by the retrieved ROI to segment a specific organ.
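
Independent of this paper's full pipeline, prompting SAM with a bounding-box region of interest looks roughly like the sketch below (checkpoint path, image, and box coordinates are placeholders; this illustrates box-prompted SAM in general, not the SaLIP cascade itself):

```python
# Sketch of box-prompted segmentation with Segment Anything (SAM).
# Checkpoint path, image, and ROI coordinates are illustrative placeholders.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # hypothetical path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("ct_slice.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # expects an HxWx3 uint8 RGB array

roi_box = np.array([64, 80, 192, 210])  # x0, y0, x1, y1 around the target organ
masks, scores, _ = predictor.predict(box=roi_box, multimask_output=False)
organ_mask = masks[0]  # boolean HxW mask for the prompted region
```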

Anchor-based Robust Finetuning of Vision-Language Models

no code yet • 9 Apr 2024

Specifically, our method elaborates two types of anchors: i) a text-compensated anchor, which uses images from the fine-tuning set but enriches the text supervision with a pretrained captioner; and ii) an image-text-pair anchor, which is retrieved from a dataset similar to CLIP's pretraining data according to the downstream task and is associated with the original CLIP text and its rich semantics.

Condition Monitoring with Incomplete Data: An Integrated Variational Autoencoder and Distance Metric Framework

no code yet • 8 Apr 2024

Condition monitoring of industrial systems is crucial for ensuring safety and maintenance planning, yet notable challenges arise in real-world settings due to the limited or non-existent availability of fault samples.

High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning

no code yet • 7 Apr 2024

However, current attention-based models may overlook the transferability of visual features and the distinctiveness of attribute localization when learning regional features in images.

Bootstrapping Chest CT Image Understanding by Distilling Knowledge from X-ray Expert Models

no code yet • 7 Apr 2024

In this paper, we explore the feasibility of leveraging language as a naturally high-quality supervision for chest CT imaging.

Towards Large Language Model driven Reference-less Translation Evaluation for English and Indian Languages

no code yet • 3 Apr 2024

We constructed a translation evaluation task where we performed zero-shot learning, in-context example-driven learning, and fine-tuning of large language models to provide a score out of 100, where 100 represents a perfect translation and 1 represents a poor translation.
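
As a rough illustration of the zero-shot variant of such a setup (the model, prompt wording, and score parsing here are assumptions, not the paper's exact protocol), one might prompt an instruction-tuned model and parse a numeric score:

```python
# Illustrative zero-shot, reference-less translation scoring prompt.
# Model name, prompt wording, and parsing are assumptions for this sketch.
import re
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2-0.5B-Instruct")

src = "The weather is pleasant today."  # hypothetical source sentence
hyp = "Aaj mausam suhavna hai."         # hypothetical translation

prompt = (
    "Rate the following translation from English on a scale of 1 to 100, "
    "where 100 is a perfect translation and 1 is a poor one. "
    f"Reply with only the number.\nSource: {src}\nTranslation: {hyp}\nScore:"
)
out = generator(prompt, max_new_tokens=8, return_full_text=False)[0]["generated_text"]
match = re.search(r"\d+", out)
score = int(match.group()) if match else None
print(score)
```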

Diffusion based Zero-shot Medical Image-to-Image Translation for Cross Modality Segmentation

no code yet • 1 Apr 2024

To leverage generative learning for zero-shot cross-modality image segmentation, we propose a novel unsupervised image translation method.