Activating More Pixels in Image Super-Resolution Transformer

Transformer-based methods have shown impressive performance on low-level vision tasks such as image super-resolution. However, through attribution analysis we find that these networks exploit only a limited spatial range of the input, which implies that the potential of the Transformer is still not fully realized in existing networks. To activate more input pixels for better reconstruction, we propose a novel Hybrid Attention Transformer (HAT). It combines channel attention and window-based self-attention, exploiting their complementary strengths: the former leverages global statistics, while the latter has strong local fitting capability. To better aggregate cross-window information, we further introduce an overlapping cross-attention module that enhances interaction between neighboring window features. In the training stage, we additionally adopt a same-task pre-training strategy to exploit the model's potential for further improvement. Extensive experiments demonstrate the effectiveness of the proposed modules, and we further scale up the model to show that the performance of this task can be greatly improved. Our overall method significantly outperforms the state-of-the-art methods by more than 1 dB. Code and models are available at https://github.com/XPixelGroup/HAT.

Published at CVPR 2023.
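As a concrete illustration of the two attention schemes named in the abstract, below is a minimal PyTorch sketch of a hybrid block: window-based self-attention (strong local fitting) running in parallel with a channel attention branch (global statistics), both summed into the residual stream. This is a sketch under assumptions, not the repository's implementation: the module names, the `cab_weight` scaling of the channel-attention branch, and the defaults (`dim=180`, `window_size=16`, `heads=6`) are illustrative, and details such as shifted windows and relative position bias are omitted.

```python
import torch
import torch.nn as nn


class ChannelAttentionBlock(nn.Module):
    """Channel attention: global average pooling yields per-channel
    statistics that re-weight the feature maps (global information)."""

    def __init__(self, channels: int, squeeze: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # global statistics
            nn.Conv2d(channels, channels // squeeze, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // squeeze, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):                                 # x: (B, C, H, W)
        y = self.body(x)
        return y * self.attn(y)


class HybridAttentionBlock(nn.Module):
    """Window self-attention (local fitting) and channel attention
    (global statistics) run in parallel; their outputs are summed into
    the residual stream, with the CAB branch down-weighted."""

    def __init__(self, dim: int = 180, window_size: int = 16,
                 heads: int = 6, cab_weight: float = 0.01):
        super().__init__()
        self.ws = window_size
        self.norm1 = nn.LayerNorm(dim)
        self.wsa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cab = ChannelAttentionBlock(dim)
        self.cab_weight = cab_weight
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(),
                                 nn.Linear(2 * dim, dim))

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        ws = self.ws
        # Partition into non-overlapping ws x ws windows: (B*nW, ws*ws, C).
        win = (x.view(b, c, h // ws, ws, w // ws, ws)
                .permute(0, 2, 4, 3, 5, 1)
                .reshape(-1, ws * ws, c))
        q = self.norm1(win)
        attn, _ = self.wsa(q, q, q)                        # local attention
        attn = (attn.view(b, h // ws, w // ws, ws, ws, c)
                    .permute(0, 5, 1, 3, 2, 4)
                    .reshape(b, c, h, w))
        x = x + attn + self.cab_weight * self.cab(x)       # fuse both branches
        t = x.flatten(2).transpose(1, 2)                   # (B, H*W, C)
        t = t + self.mlp(self.norm2(t))                    # token-wise MLP
        return t.transpose(1, 2).view(b, c, h, w)
```

The overlapping cross-attention idea can be sketched the same way: queries come from the regular non-overlapping windows, while keys and values are unfolded from enlarged windows that overlap their neighbors, so adjacent windows exchange information directly. Again, `overlap_ratio` and the layout below are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class OverlappingCrossAttention(nn.Module):
    """Queries from regular windows attend to keys/values taken from
    enlarged, overlapping windows, directly linking neighboring windows."""

    def __init__(self, dim: int = 180, window_size: int = 16,
                 overlap_ratio: float = 0.5, heads: int = 6):
        super().__init__()
        self.ws = window_size
        self.ows = window_size + int(overlap_ratio * window_size)  # enlarged
        self.pad = (self.ows - self.ws) // 2
        self.heads = heads
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        ws, ows, nh, nw = self.ws, self.ows, h // self.ws, w // self.ws
        # Queries: non-overlapping ws x ws windows -> (B*nW, ws*ws, C).
        q = (x.view(b, c, nh, ws, nw, ws).permute(0, 2, 4, 3, 5, 1)
               .reshape(-1, ws * ws, c))
        q = self.q(q)
        # Keys/values: ows x ows patches with stride ws, so each query
        # window sees an enlarged region that overlaps its neighbors.
        kv = F.unfold(x, kernel_size=ows, stride=ws, padding=self.pad)
        kv = (kv.view(b, c, ows * ows, nh * nw).permute(0, 3, 2, 1)
                .reshape(-1, ows * ows, c))
        k, v = self.kv(kv).chunk(2, dim=-1)

        def split_heads(t):                     # (N, L, C) -> (N, h, L, d)
            return t.view(t.shape[0], t.shape[1], self.heads, -1).transpose(1, 2)

        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        attn = (q @ k.transpose(-2, -1)) * (c // self.heads) ** -0.5
        out = attn.softmax(dim=-1) @ v                     # (N, h, ws*ws, d)
        out = self.proj(out.transpose(1, 2).reshape(-1, ws * ws, c))
        # Merge windows back into the (B, C, H, W) feature map.
        return (out.view(b, nh, nw, ws, ws, c).permute(0, 5, 1, 3, 2, 4)
                   .reshape(b, c, h, w))
```

Both sketches can be smoke-tested with `torch.randn(1, 180, 64, 64)`; as written, they require spatial dimensions that are multiples of the window size.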
Benchmark results (all entries are for the Image Super-Resolution task; ranks are global leaderboard positions):

| Dataset | Scale | Model | PSNR (dB) | PSNR rank | SSIM | SSIM rank |
|---|---|---|---|---|---|---|
| BSD100 | 2x | HAT | 32.69 | #4 | 0.9060 | #3 |
| BSD100 | 2x | HAT-L | 32.74 | #2 | 0.9066 | #2 |
| BSD100 | 3x | HAT-L | 29.63 | #1 | 0.8191 | #1 |
| BSD100 | 3x | HAT | 29.59 | #3 | 0.8177 | #2 |
| BSD100 | 4x | HAT-L | 28.09 | #2 | 0.7551 | #6 |
| BSD100 | 4x | HAT | 28.05 | #4 | 0.7534 | #7 |
| Manga109 | 2x | HAT-L | 41.01 | #1 | 0.9831 | #1 |
| Manga109 | 2x | HAT | 40.71 | #3 | 0.9819 | #2 |
| Manga109 | 3x | HAT-L | 36.02 | #1 | 0.9576 | #1 |
| Manga109 | 3x | HAT | 35.84 | #3 | 0.9567 | #2 |
| Manga109 | 4x | HAT | 32.87 | #4 | 0.9319 | #3 |
| Manga109 | 4x | HAT-L | 33.09 | #2 | 0.9335 | #2 |
| Set14 | 2x | HAT-L | 35.29 | #1 | 0.9293 | #2 |
| Set14 | 2x | HAT | 35.13 | #3 | 0.9282 | #3 |
| Set14 | 3x | HAT-L | 31.47 | #1 | 0.8584 | #1 |
| Set14 | 3x | HAT | 31.33 | #3 | 0.8576 | #2 |
| Set14 | 4x | HAT-L | 29.47 | #2 | 0.8015 | #5 |
| Set14 | 4x | HAT | 29.38 | #4 | 0.8001 | #7 |
| Set5 | 2x | HAT-L | 38.91 | #1 | 0.9646 | #2 |
| Set5 | 2x | HAT | 38.73 | #3 | 0.9637 | #3 |
| Set5 | 3x | HAT-L | 35.28 | #1 | 0.9345 | #1 |
| Set5 | 3x | HAT | 35.16 | #3 | 0.9335 | #2 |
| Set5 | 4x | HAT-L | 33.30 | #2 | 0.9083 | #2 |
| Urban100 | 2x | HAT | 34.81 | #3 | 0.9489 | #2 |
| Urban100 | 2x | HAT-L | 35.09 | #1 | 0.9505 | #1 |
| Urban100 | 3x | HAT-L | 30.92 | #1 | 0.8981 | #1 |
| Urban100 | 3x | HAT | 30.70 | #3 | 0.8949 | #2 |
| Urban100 | 4x | HAT-L | 28.60 | #2 | 0.8498 | #3 |
| Urban100 | 4x | HAT | 28.37 | #4 | 0.8447 | #4 |
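
For context on the numbers above: super-resolution benchmarks conventionally report PSNR and SSIM on the Y (luma) channel of YCbCr, after cropping a border of `scale` pixels. The helper below is a minimal sketch of that convention; the exact evaluation protocol behind each leaderboard entry is an assumption here.

```python
import numpy as np


def rgb_to_y(img: np.ndarray) -> np.ndarray:
    """Y (luma) channel of an 8-bit RGB image, ITU-R BT.601 coefficients."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 16.0 + (65.481 * r + 128.553 * g + 24.966 * b) / 255.0


def psnr_y(sr: np.ndarray, hr: np.ndarray, scale: int) -> float:
    """PSNR (dB) on the Y channel, cropping a `scale`-pixel border,
    as is customary for super-resolution benchmarks."""
    sr_y = rgb_to_y(sr.astype(np.float64))[scale:-scale, scale:-scale]
    hr_y = rgb_to_y(hr.astype(np.float64))[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - hr_y) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```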
