Zero-Shot Video Object Segmentation

14 papers with code • 0 benchmarks • 0 datasets

Zero-shot video object segmentation (VOS) is a challenging task that consists of segmenting and tracking multiple moving objects in a video fully automatically, without any manual initialization.
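To make the task setup concrete, here is a toy baseline (not any listed paper's method): moving pixels are segmented purely from frame differencing, with no object mask provided at any point — the defining property of the zero-shot setting. The function name and threshold are illustrative assumptions.

```python
import numpy as np

def zero_shot_vos_baseline(frames, motion_thresh=0.1):
    """Toy zero-shot VOS: segment moving pixels by frame differencing.

    frames: (T, H, W) float array of grayscale frames in [0, 1].
    Returns (T, H, W) boolean masks; no manual initialization is used.
    """
    masks = [np.zeros(frames[0].shape, dtype=bool)]  # no motion cue yet at t=0
    for t in range(1, len(frames)):
        motion = np.abs(frames[t] - frames[t - 1])   # crude motion cue
        masks.append(motion > motion_thresh)         # moving pixels = foreground
    return np.stack(masks)

# A bright square moving one pixel to the right between two frames.
frames = np.zeros((2, 8, 8))
frames[0, 2:5, 2:5] = 1.0
frames[1, 2:5, 3:6] = 1.0
masks = zero_shot_vos_baseline(frames)
```

Real zero-shot VOS methods replace the differencing cue with learned appearance and optical-flow features, but the input/output contract — video in, per-frame masks out, no initialization — is the same.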

Most implemented papers

RVOS: End-to-End Recurrent Network for Video Object Segmentation

imatge-upc/rvos CVPR 2019

Multi-object video object segmentation is a challenging task, especially in the zero-shot case, where no object mask is given in the initial frame and the model must find the objects to segment throughout the sequence.

Zero-Shot Video Object Segmentation via Attentive Graph Neural Networks

carrierlxk/AGNN ICCV 2019

Through parametric message passing, AGNN efficiently captures and mines much richer, higher-order relations between video frames, enabling a more complete understanding of video content and more accurate foreground estimation.
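A minimal sketch of what one round of parametric message passing over frame nodes could look like, treating each frame embedding as a node in a fully connected graph. This is a simplified illustration, not AGNN's actual architecture; the weight matrices stand in for learned parameters and are random here.

```python
import numpy as np

def message_passing_step(node_feats, W_msg, W_upd):
    """One illustrative round of parametric message passing over frame nodes.

    node_feats: (N, D) embeddings, one per video frame (fully connected graph).
    W_msg, W_upd: (D, D) parameter matrices (learned in a real model).
    """
    # Pairwise affinities between frame embeddings (softmax-normalized rows).
    logits = node_feats @ W_msg @ node_feats.T           # (N, N) relations
    np.fill_diagonal(logits, -np.inf)                    # no self-messages
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    messages = attn @ node_feats                         # aggregate from neighbors
    return np.tanh((node_feats + messages) @ W_upd)      # updated node states

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))                      # 4 frames, 8-dim features
updated = message_passing_step(feats, rng.standard_normal((8, 8)) * 0.1,
                               rng.standard_normal((8, 8)) * 0.1)
```

Stacking several such rounds lets information from every frame influence every other frame's representation, which is the intuition behind "higher-order relations between video frames."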

Motion-Attentive Transition for Zero-Shot Video Object Segmentation

tfzhou/MATNet 9 Mar 2020

In this paper, we present a novel Motion-Attentive Transition Network (MATNet) for zero-shot video object segmentation, which provides a new way of leveraging motion information to reinforce spatio-temporal object representation.

ALBA: Reinforcement Learning for Video Object Segmentation

kini5gowda/ALBA-RL-for-VOS 26 May 2020

We treat this as a grouping problem by exploiting object proposals and making a joint inference about grouping over both space and time.
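A toy sketch of space-time grouping over object proposals, assuming boxes as proposals and greedy IoU-based linking (a deliberate simplification; ALBA learns the grouping with reinforcement learning rather than using a fixed heuristic).

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def group_proposals(frames, thresh=0.3):
    """Greedy space-time grouping: link each proposal to the best-overlapping
    track from the previous frames, or start a new object track.

    frames: list (over time) of lists of boxes; returns a list of tracks.
    """
    tracks = [[box] for box in frames[0]]
    for boxes in frames[1:]:
        for box in boxes:
            best = max(tracks, key=lambda t: iou(t[-1], box))
            if iou(best[-1], box) >= thresh:
                best.append(box)          # extend an existing object track
            else:
                tracks.append([box])      # start a new object
    return tracks

# One slowly moving object plus a new object appearing in frame 3.
frames = [[(0, 0, 10, 10)], [(1, 0, 11, 10)], [(20, 20, 30, 30)]]
tracks = group_proposals(frames)
```

Each resulting track is one object's trajectory of proposals over space and time, which is the "joint inference about grouping" the summary refers to.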

Video Object Segmentation with Episodic Graph Memory Networks

carrierlxk/GraphMemVOS ECCV 2020

Efficiently adapting a segmentation model to a specific video, and to online variations in target appearance, are fundamental issues in the field of video object segmentation.

MATNet: Motion-Attentive Transition Network for Zero-Shot Video Object Segmentation

tfzhou/MATNet IEEE Transactions on Image Processing 2020

To further demonstrate the generalization ability of our spatiotemporal learning framework, we extend MATNet to another relevant task: dynamic visual attention prediction (DVAP).

Learning Motion-Appearance Co-Attention for Zero-Shot Video Object Segmentation

isyangshu/amc-net ICCV 2021

Making appearance and motion information interact effectively in complex scenarios is a fundamental issue in flow-based zero-shot video object segmentation.
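A minimal sketch of motion-appearance co-attention, assuming per-position appearance and flow features: each stream is re-weighted by its affinity with the other stream and the two are fused. This is a generic co-attention illustration without learned projections, not AMC-Net's exact module.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(appearance, motion):
    """Simplified motion-appearance co-attention (no learned weights).

    appearance, motion: (P, D) features over P spatial positions.
    """
    S = appearance @ motion.T                        # (P, P) cross-modal affinity
    A_att = softmax(S, axis=1) @ motion              # motion-enhanced appearance
    M_att = softmax(S.T, axis=1) @ appearance        # appearance-enhanced motion
    return np.concatenate([A_att, M_att], axis=1)    # fused (P, 2D) features

rng = np.random.default_rng(1)
fused = co_attention(rng.standard_normal((6, 4)), rng.standard_normal((6, 4)))
```

The point of the symmetric design is that neither modality dominates: appearance can suppress spurious flow (e.g. camera motion), and flow can highlight objects that blend into the background.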

Multi-Source Fusion and Automatic Predictor Selection for Zero-Shot Video Object Segmentation

xiaoqi-zhao-dlut/multi-source-aps-zvos 11 Aug 2021

In this paper, we propose a novel multi-source fusion network for zero-shot video object segmentation.

Adaptive Multi-source Predictor for Zero-shot Video Object Segmentation

xiaoqi-zhao-dlut/multi-source-aps-zvos 18 Mar 2023

In the static object predictor, the RGB source is simultaneously converted into depth and static saliency sources.
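To illustrate the multi-source idea, here is a toy confidence-weighted fusion of per-source mask predictions. The weighting scheme and source names are illustrative assumptions; the paper's adaptive predictor selects and fuses sources with learned components rather than fixed scalars.

```python
import numpy as np

def fuse_predictions(preds, confidences):
    """Toy multi-source fusion: confidence-weighted average of per-source masks.

    preds: dict mapping source name -> (H, W) foreground-probability map.
    confidences: dict mapping source name -> scalar reliability weight.
    """
    total = sum(confidences.values())
    return sum(confidences[k] * preds[k] for k in preds) / total

preds = {"rgb": np.full((4, 4), 0.8),       # appearance-based prediction
         "depth": np.full((4, 4), 0.4),     # depth-based prediction
         "saliency": np.full((4, 4), 0.6)}  # static-saliency prediction
conf = {"rgb": 0.5, "depth": 0.25, "saliency": 0.25}
fused = fuse_predictions(preds, conf)
```

Weighting by per-source reliability is what lets such a model fall back on depth or saliency when the RGB stream is unreliable, e.g. under low contrast.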

Co-attention Propagation Network for Zero-Shot Video Object Segmentation

nust-machine-intelligence-laboratory/hcpn 8 Apr 2023

Zero-shot video object segmentation (ZS-VOS) aims to segment foreground objects in a video sequence without prior knowledge of these objects.