Zero-Shot Semantic Segmentation
17 papers with code • 4 benchmarks • 3 datasets
Most implemented papers
Open-Vocabulary Semantic Segmentation with Decoupled One-Pass Network
Recently, the open-vocabulary semantic segmentation problem has attracted increasing attention, and the best-performing methods are based on two-stream networks: one stream generates proposal masks, while the other classifies each segment using a pretrained vision-language model.
Delving into Shape-aware Zero-shot Semantic Segmentation
Thanks to the impressive progress of large-scale vision-language pretraining, recent recognition models can classify arbitrary objects in a zero-shot, open-set manner with surprisingly high accuracy.
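The zero-shot classification scheme these recognition models share can be sketched in a few lines: compare an image embedding against text embeddings of candidate class names by cosine similarity, then softmax the similarities into class probabilities. The random embeddings below are stand-ins for a real vision-language model's outputs, and the temperature value is an illustrative assumption.

```python
import numpy as np

def zero_shot_classify(image_embed, text_embeds, temperature=0.01):
    """CLIP-style zero-shot classification: cosine similarity between
    one image embedding and per-class text embeddings, softmaxed into
    class probabilities."""
    img = image_embed / np.linalg.norm(image_embed)
    txt = text_embeds / np.linalg.norm(text_embeds, axis=1, keepdims=True)
    logits = (txt @ img) / temperature        # one logit per class
    e = np.exp(logits - logits.max())         # numerically stable softmax
    return e / e.sum()

# Random stand-ins for encoder outputs (a real model would produce these).
rng = np.random.default_rng(0)
probs = zero_shot_classify(rng.normal(size=512), rng.normal(size=(4, 512)))
print(probs.shape)  # (4,) — one probability per candidate class
```

Because the class set enters only through the text embeddings, swapping in new class names requires no retraining, which is what makes the open-set behavior possible.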
What a MESS: Multi-Domain Evaluation of Zero-Shot Semantic Segmentation
To address this problem, zero-shot semantic segmentation makes use of large self-supervised vision-language models, allowing zero-shot transfer to unseen classes.
CLIP-DIY: CLIP Dense Inference Yields Open-Vocabulary Semantic Segmentation For-Free
The emergence of CLIP has opened the way for open-world image perception.
An easy zero-shot learning combination: Texture Sensitive Semantic Segmentation IceHrNet and Advanced Style Transfer Learning Strategy
First, we built IPC_RI_SEG, a river ice semantic segmentation dataset, using a fixed camera and covering the entire melting process of the river ice.
SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference
Specifically, we replace the traditional self-attention block in the last layer of the CLIP vision encoder with our CSA module and reuse its pretrained query, key, and value projection matrices, yielding a training-free adaptation approach for CLIP's zero-shot semantic segmentation.
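One reading of this substitution can be sketched with numpy: keep the pretrained q/k/v projections, but build the attention map from query-query and key-key correlations rather than the usual query-key product. The exact formulation of CSA here is an assumption based on the abstract, not a verified reproduction of the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def csa_attention(x, Wq, Wk, Wv, scale):
    """Correlative self-attention sketch: attention weights come from
    q·qT and k·kT correlations instead of q·kT, while the pretrained
    projection matrices Wq, Wk, Wv are reused unchanged."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ q.T / scale) + softmax(k @ k.T / scale)
    return attn @ v

# Random tokens and projections standing in for CLIP's last layer.
rng = np.random.default_rng(0)
d = 64
x = rng.normal(size=(16, d))                       # 16 patch tokens
Wq, Wk, Wv = (rng.normal(size=(d, d)) * d**-0.5 for _ in range(3))
out = csa_attention(x, Wq, Wk, Wv, scale=np.sqrt(d))
print(out.shape)  # (16, 64)
```

Because only the attention computation changes and no new weights are introduced, the adaptation needs no training, consistent with the "training-free" claim above.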
Spectral Prompt Tuning: Unveiling Unseen Classes for Zero-Shot Semantic Segmentation
Recently, CLIP has been applied to pixel-level zero-shot segmentation tasks.