Textural-Perceptual Joint Learning for No-Reference Super-Resolution Image Quality Assessment

27 May 2022 · Yuqing Liu, Qi Jia, Shanshe Wang, Siwei Ma, Wen Gao

Image super-resolution (SR) has been widely investigated in recent years. However, it is challenging to fairly estimate the performance of various SR methods, due to the lack of reliable and accurate criteria for perceptual quality. Existing metrics concentrate on specific kinds of degradation without distinguishing visually sensitive areas, and are therefore unable to describe the diverse degradation situations of SR images in terms of both low-level textural and high-level perceptual information. In this paper, we focus on the textural and perceptual degradation of SR images, and design a dual-stream network, dubbed TPNet, to jointly explore textural and perceptual information for quality assessment. Mimicking the human visual system (HVS), which pays more attention to significant image areas, we develop spatial attention to make visually sensitive information more distinguishable, and utilize feature normalization (F-Norm) to boost the network's representation ability. Experimental results show that TPNet predicts visual quality scores more accurately than other methods and demonstrates better consistency with human judgment. The source code will be available at http://github.com/yuqing-liu-dut/NRIQA_SR
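To make the dual-stream idea concrete, below is a minimal PyTorch sketch of a two-branch no-reference IQA model with a simple spatial-attention gate. All module names, layer counts, and channel widths here are illustrative assumptions, not the authors' actual TPNet architecture (and F-Norm is omitted); see the released source code for the real implementation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Reweights feature maps with a learned spatial saliency mask,
    loosely mimicking HVS attention to visually sensitive areas."""
    def __init__(self, channels):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),  # per-pixel saliency logit
            nn.Sigmoid(),                           # mask in [0, 1]
        )

    def forward(self, x):
        return x * self.mask(x)  # broadcast (B,1,H,W) mask over channels

class DualStreamIQA(nn.Module):
    """Hypothetical dual-stream regressor: a shallow branch for
    low-level textural cues and a deeper, downsampling branch for
    high-level perceptual cues, fused into one quality score."""
    def __init__(self):
        super().__init__()
        self.textural = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            SpatialAttention(32),
            nn.AdaptiveAvgPool2d(1),
        )
        self.perceptual = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            SpatialAttention(64),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32 + 64, 1)  # regress a scalar quality score

    def forward(self, x):
        t = self.textural(x).flatten(1)    # (B, 32) textural descriptor
        p = self.perceptual(x).flatten(1)  # (B, 64) perceptual descriptor
        return self.head(torch.cat([t, p], dim=1))

# Usage: score a batch of SR images (random tensors stand in for images).
model = DualStreamIQA()
scores = model(torch.randn(2, 3, 224, 224))  # shape (2, 1)
```

The design choice worth noting is the fusion point: each stream is pooled to a global descriptor before concatenation, so the two branches can operate at different spatial resolutions while still contributing to a single regressed score.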
