Zero-Shot Learning
563 papers with code • 18 benchmarks • 29 datasets
Zero-shot learning (ZSL) is the task of recognizing classes that were never seen during training: no labeled examples of those classes are available during supervised learning.
Earlier work in zero-shot learning uses attributes in a two-step approach to infer unseen classes. In the computer vision context, more recent advances learn mappings from image feature space to a semantic space; other approaches learn non-linear multimodal embeddings. In the modern NLP context, language models can be evaluated on downstream tasks without fine-tuning, using only a task description in the prompt.
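As a concrete illustration of the embedding-based recipe, the sketch below projects an image feature into an attribute space and assigns the nearest unseen class by cosine similarity. The projection matrix and attribute vectors are random placeholders (in practice the projection is learned on seen classes, e.g. against AwA's 85 attributes); nothing here comes from a specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder projection from 2048-d visual features to an 85-d attribute
# space; in practice W is learned on seen classes.
W = 0.01 * rng.normal(size=(85, 2048))

# Attribute signatures of 10 unseen classes (one row per class),
# normally provided as side information.
unseen_attrs = rng.random((10, 85))

def predict_unseen(image_feature):
    """Map a visual feature into attribute space, then return the index of
    the unseen class whose attribute vector is most cosine-similar."""
    sem = W @ image_feature
    sem = sem / (np.linalg.norm(sem) + 1e-8)
    protos = unseen_attrs / (np.linalg.norm(unseen_attrs, axis=1, keepdims=True) + 1e-8)
    return int(np.argmax(protos @ sem))

print(predict_unseen(rng.normal(size=2048)))  # prints an unseen-class index
```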
Benchmark datasets for zero-shot learning include aPY, AwA, and CUB, among others.
(Image credit: Prototypical Networks for Few-shot Learning in PyTorch)
Libraries
Use these libraries to find Zero-Shot Learning models and implementations.
Latest papers with no code
Test-Time Adaptation with SaLIP: A Cascade of SAM and CLIP for Zero-shot Medical Image Segmentation
Finally, SAM is prompted by the retrieved ROI to segment a specific organ.
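The paper's SaLIP code isn't reproduced here, but the general CLIP-then-SAM cascade it describes can be sketched as below, assuming the open-source `clip` and `segment_anything` packages, a local SAM checkpoint, and pre-computed candidate ROI boxes (the file names, prompt, and boxes are placeholders):

```python
import numpy as np
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder inputs: a medical image and candidate ROI boxes (x0, y0, x1, y1).
image = Image.open("scan.png").convert("RGB")
candidate_boxes = [(10, 10, 120, 120), (60, 40, 200, 180)]

# Step 1: rank ROI crops against a text prompt with CLIP (zero-shot retrieval).
text = clip.tokenize(["a photo of the liver"]).to(device)
with torch.no_grad():
    text_feat = clip_model.encode_text(text)
    crops = torch.stack([preprocess(image.crop(b)) for b in candidate_boxes]).to(device)
    img_feats = clip_model.encode_image(crops)
    sims = torch.cosine_similarity(img_feats, text_feat)
best_box = np.array(candidate_boxes[int(sims.argmax())])

# Step 2: prompt SAM with the retrieved ROI box to segment the organ.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth").to(device)
predictor = SamPredictor(sam)
predictor.set_image(np.array(image))
masks, scores, _ = predictor.predict(box=best_box[None, :], multimask_output=False)
```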
Anchor-based Robust Finetuning of Vision-Language Models
Specifically, our method elaborates two types of anchors: i) a text-compensated anchor, which uses images from the finetuning set but enriches the text supervision with a pretrained captioner; and ii) an image-text-pair anchor, retrieved according to the downstream task from a dataset similar to CLIP's pretraining data and associated with the original CLIP text's rich semantics.
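As a rough illustration of how anchoring can regularize finetuning (a generic sketch, not the paper's method), one can penalize the finetuned model for drifting away from a frozen pretrained CLIP on anchor image-text pairs; `clip_model`, `frozen_clip`, and the anchor batches are placeholders:

```python
import torch
import torch.nn.functional as F

def anchor_regularizer(clip_model, frozen_clip, anchor_images, anchor_texts):
    """Toy anchor loss: keep the finetuned model's embeddings of anchor
    image-text pairs close to those of the frozen pretrained CLIP."""
    with torch.no_grad():
        ref_img = frozen_clip.encode_image(anchor_images)
        ref_txt = frozen_clip.encode_text(anchor_texts)
    cur_img = clip_model.encode_image(anchor_images)
    cur_txt = clip_model.encode_text(anchor_texts)
    return (1 - F.cosine_similarity(cur_img, ref_img)).mean() + \
           (1 - F.cosine_similarity(cur_txt, ref_txt)).mean()

# total_loss = task_loss + lambda_anchor * anchor_regularizer(...)
```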
Condition Monitoring with Incomplete Data: An Integrated Variational Autoencoder and Distance Metric Framework
Condition monitoring of industrial systems is crucial for ensuring safety and maintenance planning, yet notable challenges arise in real-world settings due to the limited or non-existent availability of fault samples.
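A common way to realize this kind of framework, sketched under the assumption that a variational autoencoder has already been trained on healthy-condition data only: score new samples by their latent-space distance to the healthy distribution and flag outliers as (possibly unseen) faults. The latent codes below are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder VAE latent codes for healthy samples (500 samples, 16-d latent).
healthy_latents = rng.normal(size=(500, 16))

mu = healthy_latents.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy_latents, rowvar=False) + 1e-6 * np.eye(16))

def anomaly_score(z):
    """Mahalanobis distance of a latent code to the healthy distribution;
    large values indicate a potential fault, even one never seen in training."""
    d = z - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Set the alarm threshold from healthy data, e.g. at the 99th percentile.
threshold = np.percentile([anomaly_score(z) for z in healthy_latents], 99)
print(anomaly_score(3 * rng.normal(size=16)) > threshold)  # likely flagged
```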
High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning
However, current attention-based models may overlook the transferability of visual features and the distinctiveness of attribute localization when learning regional features in images.
Bootstrapping Chest CT Image Understanding by Distilling Knowledge from X-ray Expert Models
In this paper, we explore the feasibility of leveraging language as a naturally high-quality supervision for chest CT imaging.
Towards Large Language Model-driven Reference-less Translation Evaluation for English and Indian Languages
We constructed a translation evaluation task where we performed zero-shot learning, in-context example-driven learning, and fine-tuning of large language models to provide a score out of 100, where 100 represents a perfect translation and 1 represents a poor translation.
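The zero-shot variant of such an evaluator amounts to a single scoring prompt. A minimal sketch follows; the prompt wording and the Hindi example are illustrative, not the paper's, and `llm` stands in for any instruction-tuned model call:

```python
def build_eval_prompt(source, translation):
    """Zero-shot, reference-less prompt: the model sees only the source and
    the candidate translation and must return a score from 1 to 100."""
    return (
        "Rate the quality of the following translation on a scale of 1 to 100, "
        "where 100 represents a perfect translation and 1 represents a poor "
        "translation. Reply with only the number.\n"
        f"Source (English): {source}\n"
        f"Translation (Hindi): {translation}\n"
        "Score:"
    )

prompt = build_eval_prompt("The weather is pleasant today.", "आज मौसम सुहावना है।")
# score = int(llm(prompt))  # `llm` is a placeholder for an LLM inference call
```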
Diffusion-based Zero-shot Medical Image-to-Image Translation for Cross-Modality Segmentation
To leverage generative learning for zero-shot cross-modality image segmentation, we propose a novel unsupervised image translation method.
Training-Free Semantic Segmentation via LLM-Supervision
Additionally, we propose an assembly that merges the segmentation maps from the various subclass descriptors to ensure a more comprehensive representation of the different aspects in the test images.
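One plausible way to implement such an assembly (illustrative only; the paper's exact merging rule may differ) is a pixel-wise maximum over the per-subclass score maps:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder score maps for one target class, shape (subclasses, H, W),
# e.g. maps produced from descriptors like "tabby cat", "siamese cat", ...
subclass_maps = rng.random((4, 64, 64))

# Pixel-wise maximum: a pixel is assigned to the class if ANY subclass
# descriptor activates there, giving a more comprehensive class map.
class_map = subclass_maps.max(axis=0)
binary_mask = class_map > 0.5  # threshold is illustrative
```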
VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation
In this work, we introduce a novel Visual Prompt-guided text-to-3D diffusion model (VP3D) that explicitly unleashes the visual appearance knowledge in a 2D visual prompt to boost text-to-3D generation.
HierCode: A Lightweight Hierarchical Codebook for Zero-shot Chinese Text Recognition
Text recognition, especially for complex scripts like Chinese, faces unique challenges due to its intricate character structures and vast vocabulary.