Spatial Feature Calibration and Temporal Fusion for Effective One-stage Video Instance Segmentation

CVPR 2021 · Minghan Li, Shuai Li, Lida Li, Lei Zhang

Modern one-stage video instance segmentation networks suffer from two limitations. First, convolutional features are aligned neither with anchor boxes nor with ground-truth bounding boxes, which reduces the mask sensitivity to spatial location. Second, a video is directly divided into individual frames for frame-level instance segmentation, ignoring the temporal correlation between adjacent frames. To address these issues, we propose a simple yet effective one-stage video instance segmentation framework based on spatial calibration and temporal fusion, namely STMask. To ensure that features are spatially calibrated with the ground-truth bounding boxes, we first predict regressed bounding boxes around the ground-truth boxes and extract features from them for frame-level instance segmentation. To further exploit the temporal correlation among video frames, we add a temporal fusion module that infers instance masks from each frame to its adjacent frames, which helps our framework handle challenging cases such as motion blur, partial occlusion and unusual object-to-camera poses. Experiments on the YouTube-VIS valid set show that the proposed STMask with a ResNet-50/-101 backbone obtains 33.5%/36.8% mask AP while achieving 28.6/23.4 FPS on video instance segmentation. The code is released online at https://github.com/MinghanLi/STMask.

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Video Instance Segmentation | OVIS validation | STMask (R101-DCN-FPN) | mask AP | 17.3 | #36 |
| Video Instance Segmentation | OVIS validation | STMask (R101-DCN-FPN) | AP50 | 35.4 | #34 |
| Video Instance Segmentation | OVIS validation | STMask (R101-DCN-FPN) | AP75 | 15.2 | #34 |
| Video Instance Segmentation | OVIS validation | STMask (R101-DCN-FPN) | AR1 | 8.4 | #28 |
| Video Instance Segmentation | OVIS validation | STMask (R101-DCN-FPN) | AR10 | 23.1 | #28 |
| Video Instance Segmentation | OVIS validation | STMask (R101-DCN-FPN) | APso | 11.1 | #7 |
| Video Instance Segmentation | OVIS validation | STMask (R101-DCN-FPN) | APmo | 14.7 | #8 |
| Video Instance Segmentation | OVIS validation | STMask (R101-DCN-FPN) | APho | 23.7 | #3 |
| Video Instance Segmentation | YouTube-VIS 2021 | STMask (R101-DCN-FPN) | mask AP | 34.6 | #24 |
| Video Instance Segmentation | YouTube-VIS 2021 | STMask (R101-DCN-FPN) | AP50 | 54.0 | #24 |
| Video Instance Segmentation | YouTube-VIS 2021 | STMask (R101-DCN-FPN) | AP75 | 38.0 | #24 |
| Video Instance Segmentation | YouTube-VIS 2021 | STMask (R101-DCN-FPN) | AR10 | 39.1 | #24 |
| Video Instance Segmentation | YouTube-VIS 2021 | STMask (R101-DCN-FPN) | AR1 | 29.4 | #24 |
| Video Instance Segmentation | YouTube-VIS validation | STMask (R101-DCN-FPN) | mask AP | 36.8 | #35 |
| Video Instance Segmentation | YouTube-VIS validation | STMask (R101-DCN-FPN) | AP50 | 56.8 | #36 |
| Video Instance Segmentation | YouTube-VIS validation | STMask (R101-DCN-FPN) | AP75 | 38.0 | #38 |
| Video Instance Segmentation | YouTube-VIS validation | STMask (R101-DCN-FPN) | AR1 | 34.8 | #34 |
| Video Instance Segmentation | YouTube-VIS validation | STMask (R101-DCN-FPN) | AR10 | 41.8 | #32 |
