Zero-Shot Instance Segmentation
9 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Segment Anything
We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation.
EfficientViT: Multi-Scale Linear Attention for High-Resolution Dense Prediction
Without performance loss on Cityscapes, our EfficientViT provides up to 13.9× and 6.2× GPU latency reduction over SegFormer and SegNeXt, respectively.
Zero-Shot Instance Segmentation
We follow this motivation and propose a new task set named zero-shot instance segmentation (ZSI).
Segment Anything in High Quality
HQ-SAM is trained only on the introduced dataset of 44k masks, which takes just 4 hours on 8 GPUs.
SupeRGB-D: Zero-shot Instance Segmentation in Cluttered Indoor Environments
We introduce a zero-shot split for the Tabletop Objects Dataset (TOD-Z) to enable this study and present a method that uses annotated objects to learn the "objectness" of pixels and generalize to unseen object categories in cluttered indoor environments.
Semantic-Promoted Debiasing and Background Disambiguation for Zero-Shot Instance Segmentation
Novel objects need to be rescued both from the background and from the dominant seen categories.
Fast Segment Anything
In this paper, we propose a speed-up alternative method for this fundamental task with comparable performance.
EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything
On segment anything tasks such as zero-shot instance segmentation, our EfficientSAMs with SAMI-pretrained lightweight image encoders perform favorably, with a significant gain (e.g., ~4 AP on COCO/LVIS) over other fast SAM models.
EfficientViT-SAM: Accelerated Segment Anything Model Without Performance Loss
For training, we begin with knowledge distillation from the SAM-ViT-H image encoder to EfficientViT.