See More, Know More: Unsupervised Video Object Segmentation with Co-Attention Siamese Networks

We introduce a novel network, called CO-attention Siamese Network (COSNet), to address the unsupervised video object segmentation task from a holistic view. We emphasize the importance of the inherent correlation among video frames and incorporate a global co-attention mechanism to further improve state-of-the-art deep learning based solutions, which primarily focus on learning discriminative foreground representations over appearance and motion within short-term temporal segments. The co-attention layers in our network provide an efficient and competent mechanism for capturing global correlations and scene context, jointly computing co-attention responses and appending them into a joint feature space. We train COSNet with pairs of video frames, which naturally augments the training data and increases learning capacity. During the segmentation stage, the co-attention model processes multiple reference frames together, encoding information that helps better infer the frequently reappearing, salient foreground objects. We propose a unified, end-to-end trainable framework from which different co-attention variants can be derived for mining the rich context within videos. Extensive experiments on three large benchmarks show that COSNet outperforms current alternatives by a large margin.
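The co-attention step described above can be sketched in a few lines: flatten the feature maps of two frames, compute an affinity matrix through a learnable weight, normalize it with row- and column-wise softmaxes, and append each frame's co-attention response to its own features. The sketch below is a minimal NumPy illustration of this idea, not the paper's implementation; the function names and the identity weight `W` in the usage example are assumptions for clarity (in COSNet, `W` is learned end-to-end and the features come from a Siamese backbone).

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(Fa, Fb, W):
    """Vanilla co-attention between two frame feature maps.

    Fa, Fb: (C, H, W) features from a shared (Siamese) backbone.
    W:      (C, C) learnable affinity weight (hypothetical stand-in here).
    Returns features with the co-attention response appended (2C, H, W).
    """
    C, H, Wd = Fa.shape
    Va = Fa.reshape(C, H * Wd)          # flatten spatial dimensions
    Vb = Fb.reshape(C, H * Wd)
    S = Va.T @ W @ Vb                   # affinity matrix, shape (HW, HW)
    # each position in one frame attends over all positions of the other
    Za = Vb @ softmax(S, axis=1).T      # frame-b summary per a-position
    Zb = Va @ softmax(S, axis=0)        # frame-a summary per b-position
    # append co-attention responses into a joint feature space
    Xa = np.concatenate([Va, Za], axis=0).reshape(2 * C, H, Wd)
    Xb = np.concatenate([Vb, Zb], axis=0).reshape(2 * C, H, Wd)
    return Xa, Xb

# usage: two 4-channel 3x3 feature maps, identity weight for illustration
rng = np.random.default_rng(0)
Fa, Fb = rng.standard_normal((4, 3, 3)), rng.standard_normal((4, 3, 3))
Xa, Xb = co_attention(Fa, Fb, np.eye(4))
```

Training on frame pairs, as in the abstract, amounts to sampling (Fa, Fb) from the same video so the softmax-normalized affinity learns to highlight the foreground object they share.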

PDF Abstract (CVPR 2019)
| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Unsupervised Video Object Segmentation | DAVIS 2016 val | COSNet | G | 80.0 | #20 |
| Unsupervised Video Object Segmentation | DAVIS 2016 val | COSNet | J | 80.5 | #21 |
| Unsupervised Video Object Segmentation | DAVIS 2016 val | COSNet | F | 79.4 | #19 |
| Unsupervised Video Object Segmentation | FBMS test | COSNet | J | 75.6 | #10 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | COSNet | S-measure | 0.654 | #11 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | COSNet | mean E-measure | 0.600 | #11 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | COSNet | weighted F-measure | 0.431 | #11 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | COSNet | mean F-measure | 0.496 | #11 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | COSNet | Dice | 0.596 | #10 |
| Video Polyp Segmentation | SUN-SEG-Easy (Unseen) | COSNet | Sensitivity | 0.359 | #13 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | COSNet | S-measure | 0.670 | #11 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | COSNet | mean E-measure | 0.627 | #11 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | COSNet | weighted F-measure | 0.443 | #10 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | COSNet | mean F-measure | 0.506 | #11 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | COSNet | Dice | 0.606 | #8 |
| Video Polyp Segmentation | SUN-SEG-Hard (Unseen) | COSNet | Sensitivity | 0.380 | #13 |
| Unsupervised Video Object Segmentation | YouTube-Objects | COSNet | J | 70.5 | #6 |
