AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation

Video frame interpolation is one of the most challenging tasks in video processing research. Recently, many deep-learning-based methods have been proposed. Most of them focus on finding, with their own frame warping operations, the locations that carry useful information for estimating each output pixel. However, many of these operations are limited in their Degrees of Freedom (DoF) and fail to handle the complex motions found in real-world videos. To solve this problem, we propose a new warping module named Adaptive Collaboration of Flows (AdaCoF). Our method estimates both kernel weights and offset vectors for each target pixel to synthesize the output frame. AdaCoF is one of the most generalized warping modules and covers most other approaches as special cases, so it can handle a significantly wider range of complex motions. To further improve our framework and synthesize more realistic outputs, we introduce a dual-frame adversarial loss that is applicable only to video frame interpolation tasks. The experimental results show that our method outperforms the state-of-the-art methods both in fixed-training-set environments and on the Middlebury benchmark.
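As a rough illustration of the warping operation described in the abstract, the sketch below shows an AdaCoF-style warp in PyTorch: each output pixel is a weighted sum of input pixels sampled at regularly spaced kernel locations displaced by learned per-pixel offsets. This is a minimal sketch under assumed conventions, not the authors' implementation; the function name `adacof_warp`, the tensor layout, the kernel size, and the use of bilinear `grid_sample` are illustrative choices.

```python
import torch
import torch.nn.functional as F

def adacof_warp(frame, weights, alpha, beta, kernel_size=5):
    """Sketch of an AdaCoF-style adaptive warp (illustrative, not the official code).

    frame:   (B, C, H, W) input frame
    weights: (B, K*K, H, W) per-pixel kernel weights (assumed already normalized)
    alpha:   (B, K*K, H, W) per-pixel horizontal offsets for each kernel tap
    beta:    (B, K*K, H, W) per-pixel vertical offsets for each kernel tap
    """
    B, C, H, W = frame.shape
    K = kernel_size
    # Base pixel-coordinate grid.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=frame.dtype, device=frame.device),
        torch.arange(W, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    out = torch.zeros_like(frame)
    for idx in range(K * K):
        dy, dx = divmod(idx, K)
        # Each tap samples a regular kernel location plus a learned offset.
        sample_x = xs + (dx - K // 2) + alpha[:, idx]
        sample_y = ys + (dy - K // 2) + beta[:, idx]
        # Normalize sampling coordinates to [-1, 1] for grid_sample.
        grid = torch.stack(
            (2.0 * sample_x / (W - 1) - 1.0, 2.0 * sample_y / (H - 1) - 1.0),
            dim=-1,
        )
        sampled = F.grid_sample(frame, grid, mode="bilinear",
                                padding_mode="border", align_corners=True)
        # Accumulate the weighted contribution of this kernel tap.
        out = out + weights[:, idx:idx + 1] * sampled
    return out
```

In the full method, the per-pixel kernel weights and offset vectors would be predicted by a neural network for each input frame, and the warped frames combined to synthesize the intermediate frame; the sketch only covers the warp itself.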

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF_f | PSNR | 24.99 | #20 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF_f | SSIM | 0.903 | #18 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF_f | VMAF | 60.19 | #19 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF_f | LPIPS | 0.058 | #16 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF_f | MS-SSIM | 0.913 | #18 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF | PSNR | 23.17 | #24 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF | SSIM | 0.891 | #21 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF | VMAF | 58.29 | #22 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF | LPIPS | 0.692 | #23 |
| Video Frame Interpolation | MSU Video Frame Interpolation | AdaCoF | MS-SSIM | 0.883 | #23 |
| Video Frame Interpolation | X4K1000FPS | AdaCoF_f | PSNR | 25.81 | #14 |
| Video Frame Interpolation | X4K1000FPS | AdaCoF_f | SSIM | 0.772 | #14 |
| Video Frame Interpolation | X4K1000FPS | AdaCoF_f | tOF | 6.42 | #5 |
| Video Frame Interpolation | X4K1000FPS | AdaCoF | PSNR | 23.90 | #17 |
| Video Frame Interpolation | X4K1000FPS | AdaCoF | SSIM | 0.727 | #16 |
| Video Frame Interpolation | X4K1000FPS | AdaCoF | tOF | 6.89 | #8 |
