Exploiting Style and Attention in Real-World Super-Resolution

21 Dec 2019 · Xin Ma, Yi Li, Huaibo Huang, Mandi Luo, Ran He

Real-world image super-resolution (SR) is a challenging image translation problem. Low-resolution (LR) images are often generated by various unknown transformations rather than by simple bilinear down-sampling of high-resolution (HR) images. To address this issue, this paper proposes a novel pipeline that exploits style and attention mechanisms in real-world SR. Our pipeline consists of a style Variational Autoencoder (styleVAE) and an SR network equipped with an attention mechanism. To obtain real-world-like low-quality images paired with the HR images, we design styleVAE to transfer the complex nuisance factors found in real-world LR images to the generated LR images. We also use mutual information (MI) estimation to obtain better style information. For our SR network, we first propose a global attention residual block to learn long-range dependencies in images. We then propose a local attention residual block that steers the network's attention to the local regions of an image where texture detail needs to be filled in. Notably, styleVAE can be used in a plug-and-play manner and thus helps to improve the generalization and robustness of our SR method as well as other SR methods. Extensive experiments demonstrate that our method surpasses state-of-the-art approaches both quantitatively and qualitatively.
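The abstract does not give implementation details, but the "global attention residual block" suggests a non-local, self-attention-style layer inside a residual block. The following is a minimal sketch of such a block under assumed shapes and naming (the class `GlobalAttentionResidualBlock`, the `reduction` parameter, and the learnable `gamma` scale are illustrative assumptions, not the authors' implementation).

```python
# A hedged sketch of a global attention residual block: a residual
# convolution path augmented with non-local attention so every spatial
# position can attend to all others (long-range dependencies).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlobalAttentionResidualBlock(nn.Module):
    """Illustrative block: 3x3 conv residual path + global self-attention."""

    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        # Standard residual convolution path.
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        # 1x1 projections for query, key, and value used by the attention.
        self.query = nn.Conv2d(channels, inter, 1)
        self.key = nn.Conv2d(channels, inter, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        # Learnable scale so the attention term starts as a small perturbation.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        # Local feature path.
        feat = self.conv2(F.relu(self.conv1(x)))
        # Global attention over all h*w spatial positions.
        q = self.query(feat).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.key(feat).flatten(2)                      # (b, c', hw)
        v = self.value(feat).flatten(2).transpose(1, 2)    # (b, hw, c)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (b, hw, hw)
        context = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        # Residual connection keeps the block easy to optimize.
        return x + feat + self.gamma * context


# Usage: the block preserves the feature-map shape, e.g.
# GlobalAttentionResidualBlock(64)(torch.randn(1, 64, 32, 32)).
```

A local attention residual block would follow the same pattern but restrict the attention to local windows; the paper itself should be consulted for the exact formulation.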
