Zero-Shot Video Object Segmentation
14 papers with code • 0 benchmarks • 0 datasets
Zero-shot video object segmentation (VOS) is a challenging task that consists of segmenting and tracking multiple moving objects in a video fully automatically, without any manual initialization.
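To make the "no manual initialization" aspect concrete, here is a toy baseline that produces masks purely from motion via frame differencing. This is an illustrative sketch only, not any of the methods below, which instead use learned appearance and motion models; the function name and threshold are invented for the example.

```python
import numpy as np

def zero_shot_vos(frames, thresh=20):
    """Toy zero-shot VOS baseline via frame differencing.

    No object mask is supplied for the first frame -- masks come
    entirely from pixel changes between consecutive frames.
    (Illustrative only; real zero-shot VOS methods are learned.)
    """
    masks = [np.zeros(frames[0].shape, dtype=bool)]  # nothing known at t=0
    for prev, cur in zip(frames, frames[1:]):
        diff = np.abs(cur.astype(int) - prev.astype(int))
        masks.append(diff > thresh)  # moving pixels become foreground
    return masks

# A static background with a bright patch that shifts one pixel per frame
frames = []
for t in range(3):
    f = np.zeros((8, 8), dtype=np.uint8)
    f[2:4, t:t + 2] = 255
    frames.append(f)

masks = zero_shot_vos(frames)
```

The first mask is empty because motion cannot be measured from a single frame; from the second frame on, the moving patch's leading and trailing edges are detected automatically.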
Most implemented papers
RVOS: End-to-End Recurrent Network for Video Object Segmentation
Video object segmentation with multiple objects is a challenging task, especially in the zero-shot case, where no object mask is given at the initial frame and the model has to find the objects to be segmented throughout the sequence.
Zero-Shot Video Object Segmentation via Attentive Graph Neural Networks
Through parametric message passing, AGNN is able to efficiently capture and mine much richer and higher-order relations between video frames, thus enabling a more complete understanding of video content and more accurate foreground estimation.
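The core idea of treating video frames as nodes that exchange messages can be sketched in a few lines. The update below (softmax-normalized pairwise affinities, weighted aggregation, residual mix) is an assumed simplification for illustration, not AGNN's exact parametric update:

```python
import numpy as np

def message_passing_step(node_feats, affinity, mix=0.5):
    """One simplified round of message passing between frame nodes.

    node_feats: (F, d) feature vector per frame node.
    affinity:   (F, F) pairwise frame affinities; affinity[i, j]
                weights the message sent from frame j to frame i.
    Rows are softmax-normalized, each node aggregates its neighbors'
    features, then mixes them with its own state (residual-style).
    (A sketch of the graph-neural-network idea, not AGNN's layer.)
    """
    w = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)       # rows sum to 1
    messages = w @ node_feats               # weighted sum over neighbors
    return (1 - mix) * node_feats + mix * messages

feats = np.eye(3)              # 3 frames with toy 3-d one-hot features
aff = np.ones((3, 3))          # uniform affinities between all frames
updated = message_passing_step(feats, aff)
```

After one step, every frame's representation contains information from the other frames, which is how higher-order relations across the video accumulate over repeated rounds.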
Motion-Attentive Transition for Zero-Shot Video Object Segmentation
In this paper, we present a novel Motion-Attentive Transition Network (MATNet) for zero-shot video object segmentation, which provides a new way of leveraging motion information to reinforce spatio-temporal object representation.
ALBA : Reinforcement Learning for Video Object Segmentation
We treat this as a grouping problem by exploiting object proposals and making a joint inference about grouping over both space and time.
Video Object Segmentation with Episodic Graph Memory Networks
How to make a segmentation model efficiently adapt to a specific video and to online target appearance variations are fundamentally crucial issues in the field of video object segmentation.
MATNet: Motion-Attentive Transition Network for Zero-Shot Video Object Segmentation
To further demonstrate the generalization ability of our spatiotemporal learning framework, we extend MATNet to another relevant task: dynamic visual attention prediction (DVAP).
Learning Motion-Appearance Co-Attention for Zero-Shot Video Object Segmentation
How to make the appearance and motion information interact effectively to accommodate complex scenarios is a fundamental issue in flow-based zero-shot video object segmentation.
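A common way to make the two streams interact is symmetric co-attention: each stream attends to the other through their cross-stream affinity matrix. The sketch below illustrates that general pattern under assumed shapes (N spatial positions by d channels per stream); it is not the exact layer from this or any listed paper:

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(app, mot):
    """Symmetric motion-appearance co-attention (illustrative sketch).

    app, mot: (N, d) appearance and motion (flow) features over N
    spatial positions. Each stream gathers features from the other,
    weighted by the cross-stream affinity, and adds them residually.
    """
    S = app @ mot.T                           # (N, N) cross-stream affinity
    att_app = softmax(S, axis=1) @ mot        # appearance attends to motion
    att_mot = softmax(S.T, axis=1) @ app      # motion attends to appearance
    return app + att_app, mot + att_mot

app = np.array([[1.0, 0.0], [0.0, 1.0]])
mot = np.array([[0.0, 1.0], [1.0, 0.0]])
app_out, mot_out = co_attention(app, mot)
```

The residual form keeps each stream's own evidence while injecting the complementary cue, which is the basic mechanism flow-based zero-shot VOS models build on.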
Multi-Source Fusion and Automatic Predictor Selection for Zero-Shot Video Object Segmentation
In this paper, we propose a novel multi-source fusion network for zero-shot video object segmentation.
Adaptive Multi-source Predictor for Zero-shot Video Object Segmentation
In the static object predictor, the RGB source is converted into depth and static saliency sources simultaneously.
Co-attention Propagation Network for Zero-Shot Video Object Segmentation
Zero-shot video object segmentation (ZS-VOS) aims to segment foreground objects in a video sequence without prior knowledge of these objects.