Fast and Accurate Single Image Super-Resolution via Information Distillation Network

CVPR 2018 · Zheng Hui, Xiumei Wang, Xinbo Gao

Recently, deep convolutional neural networks (CNNs) have demonstrated remarkable progress on single image super-resolution. However, as the depth and width of such networks increase, CNN-based super-resolution methods face practical challenges in computational complexity and memory consumption. To address these problems, we propose a deep but compact convolutional network that directly reconstructs the high-resolution image from the original low-resolution input. The proposed model consists of three parts: a feature extraction block, stacked information distillation blocks, and a reconstruction block. By combining an enhancement unit with a compression unit into a distillation block, local long- and short-path features can be effectively extracted. Specifically, the enhancement unit mixes together two different types of features, and the compression unit distills the more useful information for the subsequent blocks. In addition, the proposed network executes quickly owing to the comparatively small number of filters per layer and the use of group convolution. Experimental results demonstrate that the proposed method is superior to state-of-the-art methods, especially in terms of runtime.
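
To make the three-part layout concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes (feature extraction, stacked distillation blocks built from an enhancement unit plus a 1x1 compression unit, and an upsampling reconstruction block). The channel widths, block count, channel-slicing ratio, group size, and the choice of a transposed convolution for reconstruction are illustrative assumptions, not the authors' released configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class EnhancementUnit(nn.Module):
    """Mixes short-path features (a slice of shallow features concatenated with
    the unit input) with long-path features (deeper processing of the rest)."""
    def __init__(self, channels=64, slice_ratio=0.25, groups=4):
        super().__init__()
        self.d = int(channels * slice_ratio)  # channels kept on the short path (assumed ratio)
        act = nn.LeakyReLU(0.05, inplace=True)
        # Shallow stack; one grouped convolution keeps the parameter count low.
        self.shallow = nn.Sequential(
            nn.Conv2d(channels, channels, 3, 1, 1, groups=groups), act,
            nn.Conv2d(channels, channels, 3, 1, 1), act,
            nn.Conv2d(channels, channels + self.d, 3, 1, 1), act,
        )
        # Deep stack processes the remaining channels (long path).
        self.deep = nn.Sequential(
            nn.Conv2d(channels, channels, 3, 1, 1, groups=groups), act,
            nn.Conv2d(channels, channels, 3, 1, 1), act,
            nn.Conv2d(channels, channels + self.d, 3, 1, 1), act,
        )

    def forward(self, x):
        s = self.shallow(x)
        short = torch.cat([x, s[:, : self.d]], dim=1)  # short path: input + sliced features
        long = self.deep(s[:, self.d :])               # long path: deeper features
        return short + long                            # mix the two feature types


class CompressionUnit(nn.Module):
    """1x1 convolution that distills the enhanced features for the next block."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, 1)

    def forward(self, x):
        return self.conv(x)


class IDNLikeNet(nn.Module):
    """Feature extraction -> stacked distillation blocks -> reconstruction."""
    def __init__(self, scale=4, channels=64, num_blocks=4):
        super().__init__()
        self.scale = scale
        act = nn.LeakyReLU(0.05, inplace=True)
        self.fblock = nn.Sequential(                      # feature extraction block
            nn.Conv2d(3, channels, 3, 1, 1), act,
            nn.Conv2d(channels, channels, 3, 1, 1), act,
        )
        d = int(channels * 0.25)
        self.dblocks = nn.ModuleList([                    # stacked distillation blocks
            nn.Sequential(EnhancementUnit(channels),
                          CompressionUnit(channels + d, channels))
            for _ in range(num_blocks)
        ])
        # Reconstruction via transposed convolution; kernel/stride chosen so that
        # even scale factors (2x, 4x) upsample exactly.
        self.rblock = nn.ConvTranspose2d(channels, 3, kernel_size=2 * scale,
                                         stride=scale, padding=scale // 2)

    def forward(self, lr):
        base = F.interpolate(lr, scale_factor=self.scale,
                             mode='bicubic', align_corners=False)
        x = self.fblock(lr)
        for block in self.dblocks:
            x = block(x)
        return self.rblock(x) + base  # residual added to the upsampled input


if __name__ == "__main__":
    net = IDNLikeNet(scale=4)
    out = net(torch.randn(1, 3, 24, 24))
    print(out.shape)  # torch.Size([1, 3, 96, 96])
```

Adding the bicubically upsampled input to the reconstruction output is a common residual-learning choice for super-resolution networks and keeps the stacked blocks focused on high-frequency detail; treat it here as part of the sketch rather than a statement of the paper's exact formulation.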


Results from the Paper


Task                     Dataset                  Model  Metric          Value    Global Rank
Image Super-Resolution   BSD100 - 4x upscaling    IDN    PSNR            27.41    # 36
Image Super-Resolution   BSD100 - 4x upscaling    IDN    SSIM            0.7297   # 37
Image Super-Resolution   IXI                      IDN    SSIM (2x T2w)   0.9846   # 4
Image Super-Resolution   IXI                      IDN    PSNR (2x T2w)   39.09    # 4
Image Super-Resolution   IXI                      IDN    SSIM (4x T2w)   0.9312   # 5
Image Super-Resolution   IXI                      IDN    PSNR (4x T2w)   31.37    # 5
Image Super-Resolution   Set14 - 4x upscaling     IDN    PSNR            28.25    # 56
Image Super-Resolution   Set14 - 4x upscaling     IDN    SSIM            0.773    # 55
Image Super-Resolution   Urban100 - 4x upscaling  IDN    PSNR            25.41    # 42
Image Super-Resolution   Urban100 - 4x upscaling  IDN    SSIM            0.7632   # 38

Methods


No methods listed for this paper.