EDVR: Video Restoration with Enhanced Deformable Convolutional Networks

7 May 2019 · Xintao Wang, Kelvin C. K. Chan, Ke Yu, Chao Dong, Chen Change Loy

Video restoration tasks, including super-resolution, deblurring, and others, are drawing increasing attention in the computer vision community. A challenging benchmark named REDS was released in the NTIRE19 Challenge. This new benchmark challenges existing methods in two respects: (1) how to align multiple frames under large motions, and (2) how to effectively fuse frames with diverse motion and blur. In this work, we propose a novel Video Restoration framework with Enhanced Deformable convolutional networks, termed EDVR, to address these challenges. First, to handle large motions, we devise a Pyramid, Cascading and Deformable (PCD) alignment module, in which frame alignment is performed at the feature level using deformable convolutions in a coarse-to-fine manner. Second, we propose a Temporal and Spatial Attention (TSA) fusion module, in which attention is applied both temporally and spatially to emphasize important features for subsequent restoration. Thanks to these modules, EDVR wins first place in all four tracks of the NTIRE19 video restoration and enhancement challenges, outperforming the second-place entries by a large margin. EDVR also surpasses state-of-the-art published methods on video super-resolution and deblurring. The code is available at https://github.com/xinntao/EDVR.
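The two modules lend themselves to a compact illustration. Below is a minimal PyTorch sketch, not the official EDVR implementation, of the two ideas: a single-level feature alignment with a deformable convolution whose offsets are predicted from the neighboring and reference frame features, and a simplified temporal-attention fusion that weights each aligned frame by its per-pixel similarity to the reference. The actual PCD module repeats this alignment over a three-level pyramid with a cascading refinement stage, and the actual TSA module adds a spatial attention pyramid after fusion; the class names, channel sizes, and helper structure here are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class SingleLevelAlign(nn.Module):
    """Align a neighboring frame's features to the reference frame's features
    (one pyramid level of the PCD idea, without cascading refinement)."""

    def __init__(self, channels=64, deform_groups=8):
        super().__init__()
        # Predict 2D sampling offsets (x, y per 3x3 kernel location and offset group)
        # from the concatenated neighbor/reference features.
        self.offset_conv = nn.Conv2d(channels * 2, deform_groups * 2 * 3 * 3, 3, padding=1)
        # Deformable convolution resamples the neighbor features at those offsets.
        self.deform_conv = DeformConv2d(channels, channels, 3, padding=1)

    def forward(self, neighbor_feat, ref_feat):
        # Conditioning the offsets on both frames lets the network encode their motion.
        offset = self.offset_conv(torch.cat([neighbor_feat, ref_feat], dim=1))
        return self.deform_conv(neighbor_feat, offset)


class TemporalAttentionFusion(nn.Module):
    """Weight each aligned frame by its per-pixel similarity to the reference,
    then fuse all frames with a 1x1 convolution (spatial attention omitted)."""

    def __init__(self, channels=64, num_frames=5):
        super().__init__()
        self.embed_ref = nn.Conv2d(channels, channels, 3, padding=1)
        self.embed_nbr = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(num_frames * channels, channels, 1)

    def forward(self, aligned_feats):  # (B, T, C, H, W), reference frame at index T // 2
        b, t, c, h, w = aligned_feats.shape
        ref = self.embed_ref(aligned_feats[:, t // 2])
        weighted = []
        for i in range(t):
            emb = self.embed_nbr(aligned_feats[:, i])
            # Dot-product similarity with the reference gives a temporal attention map.
            attn = torch.sigmoid((emb * ref).sum(dim=1, keepdim=True))
            weighted.append(aligned_feats[:, i] * attn)
        return self.fuse(torch.cat(weighted, dim=1))


# Example: align 5 frames of 64-channel features to the center frame and fuse them.
feats = torch.randn(2, 5, 64, 64, 64)           # (batch, frames, channels, H, W)
align, fusion = SingleLevelAlign(), TemporalAttentionFusion()
ref = feats[:, 2]
aligned = torch.stack([align(feats[:, i], ref) for i in range(5)], dim=1)
fused = fusion(aligned)                         # (2, 64, 64, 64)
```

Fusing in feature space this way keeps the downstream reconstruction network agnostic to how many input frames were used; the attention weights simply downscale the contribution of poorly aligned or heavily blurred neighbors.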


Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Video Enhancement | MFQE v2 | EDVR | Incremental PSNR | 0.75 | #4
Deblurring | REDS | EDVR_Deblur | Average PSNR | 34.80 | #2
Video Super-Resolution | Vid4 - 4x upscaling | EDVR | PSNR | 27.35 | #7
Video Super-Resolution | Vid4 - 4x upscaling | EDVR | SSIM | 0.8264 | #8
Video Super-Resolution | Vid4 - 4x upscaling - BD degradation | EDVR | PSNR | 27.85 | #10
Video Super-Resolution | Vid4 - 4x upscaling - BD degradation | EDVR | SSIM | 0.8503 | #10

Methods


No methods listed for this paper.