Image Super-Resolution via RL-CSC: When Residual Learning Meets Convolutional Sparse Coding

31 Dec 2018 · Menglei Zhang, Zhou Liu, Lei Yu

We propose a simple yet effective model for Single Image Super-Resolution (SISR) that combines the merits of Residual Learning and Convolutional Sparse Coding (RL-CSC). Our model is inspired by the Learned Iterative Shrinkage-Thresholding Algorithm (LISTA). We extend LISTA to its convolutional version and build the main part of our model by strictly following this convolutional form, which improves the network's interpretability. Specifically, the convolutional sparse codes of the input feature maps are learned in a recursive manner, and high-frequency information can be recovered from them. More importantly, residual learning is applied to alleviate the training difficulty as the network goes deeper. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method: RL-CSC (30 layers) outperforms several recent state-of-the-art methods, e.g., DRRN (52 layers) and MemNet (80 layers), in both accuracy and visual quality. Code and more results are available at https://github.com/axzml/RL-CSC.
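The LISTA recursion the abstract refers to alternates a linear update with a soft-thresholding (shrinkage) step; the convolutional version replaces the matrix multiplies with convolutions. Below is a minimal 1-D NumPy sketch of that recursion, not the paper's implementation: the filters `w_e` and `w_s`, the threshold `theta`, and the iteration count are illustrative placeholders (in RL-CSC they are learned end-to-end and operate on 2-D feature maps).

```python
import numpy as np

def soft_threshold(z, theta):
    # Proximal operator of the L1 norm (the "shrinkage" step in LISTA).
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def conv_lista(x, w_e, w_s, theta, n_iter=30):
    """Convolutional LISTA-style recursion (illustrative sketch).

    z_{k+1} = soft( conv(x, w_e) + conv(z_k, w_s), theta )

    In the paper the filters are learned and shared across iterations,
    which is what makes the unrolled network recursive and interpretable.
    """
    b = np.convolve(x, w_e, mode='same')   # fixed input-dependent term
    z = soft_threshold(b, theta)           # initial sparse code estimate
    for _ in range(n_iter):
        z = soft_threshold(b + np.convolve(z, w_s, mode='same'), theta)
    return z

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                # toy 1-D "feature map"
w_e = np.array([0.1, 0.5, 0.1])            # hypothetical encoding filter
w_s = np.array([0.05, 0.2, 0.05])          # hypothetical lateral filter
codes = conv_lista(x, w_e, w_s, theta=0.1)
print(codes.shape)
```

Because `soft_threshold` is non-expansive and the lateral filter's coefficients sum to less than 1, the recursion above converges; in RL-CSC a residual connection then maps the recovered codes to the high-frequency detail that is added back to the upscaled input.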

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Super-Resolution | BSD100 - 4x upscaling | RL-CSC | PSNR | 27.44 | #33 |
| Image Super-Resolution | BSD100 - 4x upscaling | RL-CSC | SSIM | 0.7302 | #36 |
| Image Super-Resolution | Set14 - 4x upscaling | RL-CSC | PSNR | 28.29 | #54 |
| Image Super-Resolution | Set14 - 4x upscaling | RL-CSC | SSIM | 0.7741 | #53 |
| Image Super-Resolution | Urban100 - 4x upscaling | RL-CSC | PSNR | 25.59 | #38 |
| Image Super-Resolution | Urban100 - 4x upscaling | RL-CSC | SSIM | 0.7680 | #37 |
