Deficiency-Aware Masked Transformer for Video Inpainting

17 Jul 2023 · Yongsheng Yu, Heng Fan, Libo Zhang

Recent video inpainting methods have made remarkable progress by utilizing explicit guidance, such as optical flow, to propagate pixels across frames. However, there are cases where cross-frame recurrence of the masked content is unavailable, resulting in a deficiency. In such situations, instead of borrowing pixels from other frames, the model must solve an inverse problem. In this paper, we introduce a dual-modality-compatible inpainting framework called Deficiency-aware Masked Transformer (DMT), which offers three key advantages. First, we pretrain an image inpainting model, DMT_img, to serve as a prior for distilling the video model DMT_vid, which benefits hallucination in deficiency cases. Second, the self-attention module selectively incorporates spatiotemporal tokens, accelerating inference and suppressing noisy signals. Third, a simple yet effective Receptive Field Contextualizer is integrated into DMT, further improving performance. Extensive experiments on the YouTube-VOS and DAVIS datasets demonstrate that DMT_vid significantly outperforms previous solutions. Code and video demonstrations are available at github.com/yeates/DMT.
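To make the second point concrete, below is a minimal PyTorch sketch of what "selectively incorporating spatiotemporal tokens" in self-attention could look like: only a top-scoring subset of tokens serves as keys/values, shrinking the attention cost and filtering low-relevance signals. The function name, the keep_ratio parameter, and the relevance-score proxy are illustrative assumptions, not the paper's actual implementation or API.

```python
import torch


def selective_self_attention(tokens: torch.Tensor,
                             scores: torch.Tensor,
                             keep_ratio: float = 0.5) -> torch.Tensor:
    """tokens: (B, N, C) flattened spatiotemporal tokens across frames.
    scores: (B, N) relevance proxy per token (e.g., mask overlap or learned saliency).
    Only the top keep_ratio fraction of tokens act as keys/values; every token still
    issues a query, so the whole sequence is updated at reduced cost."""
    B, N, C = tokens.shape
    k = max(1, int(N * keep_ratio))

    # Pick the k most relevant tokens per batch element.
    top_idx = scores.topk(k, dim=1).indices                      # (B, k)
    gather_idx = top_idx.unsqueeze(-1).expand(-1, -1, C)         # (B, k, C)
    kv = torch.gather(tokens, 1, gather_idx)                     # selected keys/values

    # Scaled dot-product attention from all queries to the selected keys/values.
    attn = torch.softmax(tokens @ kv.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, N, k)
    return attn @ kv                                             # (B, N, C)


if __name__ == "__main__":
    x = torch.randn(2, 196 * 4, 64)   # e.g., 4 frames of 14x14 patch tokens, dim 64
    s = torch.rand(2, 196 * 4)        # dummy relevance scores
    print(selective_self_attention(x, s, keep_ratio=0.25).shape)  # (2, 784, 64)
```

With keep_ratio = 0.25, the attention matrix shrinks from N x N to N x N/4, which is where the claimed inference speed-up would come from under these assumptions.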


Results from the Paper


Task               Dataset            Model   Metric   Value    Global Rank
Video Inpainting   DAVIS              DMT     PSNR     33.82    #1
Video Inpainting   DAVIS              DMT     SSIM     0.976    #1
Video Inpainting   DAVIS              DMT     VFID     0.104    #1
Video Inpainting   DAVIS              DMT     Ewarp    -        #9
Video Inpainting   YouTube-VOS 2018   DMT     PSNR     34.27    #2
Video Inpainting   YouTube-VOS 2018   DMT     SSIM     0.9730   #2
Video Inpainting   YouTube-VOS 2018   DMT     VFID     0.044    #2
Video Inpainting   YouTube-VOS 2018   DMT     Ewarp    -        #9
