Blur More To Deblur Better: Multi-Blur2Deblur For Efficient Video Deblurring

23 Dec 2020  ·  Dongwon Park, Dong Un Kang, Se Young Chun ·

One of the key components for video deblurring is how to exploit neighboring frames. Recent state-of-the-art methods either used adjacent frames aligned to the center frame or recurrently propagated information from past frames to the current frame. Here we propose multi-blur-to-deblur (MB2D), a novel concept for exploiting neighboring frames for efficient video deblurring. Firstly, inspired by unsharp masking, we argue that using more blurred images with long exposures as additional inputs significantly improves performance. Secondly, we propose a multi-blurring recurrent neural network (MBRNN) that synthesizes more blurred images from neighboring frames, yielding substantially improved performance with existing video deblurring methods. Lastly, we propose multi-scale deblurring that connects recurrent feature maps from MBRNN (MSDR) to achieve state-of-the-art performance on the popular GoPro and Su datasets in a fast and memory-efficient way.
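
No official code is linked here, so the following is only a minimal PyTorch-style sketch of the MB2D idea as described in the abstract: a small stand-in for MBRNN synthesizes a "more blurred" (longer-exposure) frame from the center frame and its neighbors, and a deblurring head then takes the blurry frame together with that synthesized frame as extra guidance. All module names, channel sizes, and layer choices are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of the MB2D concept (not the authors' code):
# 1) synthesize a "more blurred" frame from the current frame and its neighbors,
# 2) feed it alongside the blurry frame to a deblurring network.
import torch
import torch.nn as nn

class MultiBlurSynth(nn.Module):
    """Toy stand-in for MBRNN: predicts a longer-exposure ("more blurred")
    frame from the center frame and its two neighbors."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(9, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, prev_f, cur_f, next_f):
        x = torch.cat([prev_f, cur_f, next_f], dim=1)
        # Residual on a simple temporal average, which already approximates
        # a longer exposure of the same scene.
        return self.net(x) + (prev_f + cur_f + next_f) / 3.0

class Deblurrer(nn.Module):
    """Toy deblurring head that takes the blurry frame plus the synthesized
    more-blurred frame as extra input (the blur-to-deblur step)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, cur_f, more_blurred):
        return cur_f + self.net(torch.cat([cur_f, more_blurred], dim=1))

# Usage on random frames of shape (B, 3, H, W)
prev_f, cur_f, next_f = (torch.rand(1, 3, 64, 64) for _ in range(3))
more_blurred = MultiBlurSynth()(prev_f, cur_f, next_f)
sharp_est = Deblurrer()(cur_f, more_blurred)
```

The paper's actual MBRNN is recurrent over neighboring frames and MSDR deblurs at multiple scales while reusing MBRNN's recurrent feature maps; the sketch above only conveys the core idea of synthesizing an extra, more blurred input before deblurring.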


Datasets

GoPro · DVD

Results from the Paper


Ranked #5 on Deblurring on DVD (using extra training data)

Task              Dataset  Model  Metric  Value  Global Rank  Uses Extra Training Data
Deblurring        DVD      MB2D   PSNR    32.34  #5           Yes
Image Deblurring  GoPro    MB2D   PSNR    32.16  #24
                                  SSIM    0.953  #24
Deblurring        GoPro    MB2D   PSNR    32.16  #27
                                  SSIM    0.953  #26

Methods


MB2D · MBRNN · MSDR