No-reference Screen Content Image Quality Assessment with Unsupervised Domain Adaptation

19 Aug 2020 · Baoliang Chen, Haoliang Li, Hongfei Fan, Shiqi Wang

In this paper, we investigate the capability of transferring quality assessment knowledge from natural scene images to images that are not acquired by optical cameras (e.g., screen content images, SCIs), rooted in the widely accepted view that the human visual system has adapted and evolved through the perception of the natural environment. Here, we develop the first unsupervised domain adaptation based no-reference quality assessment method for SCIs, leveraging the rich subjective ratings available for natural images (NIs). In general, it is non-trivial to directly transfer a quality prediction model from NIs to a new type of content (i.e., SCIs) with dramatically different statistical characteristics. Inspired by the transferability of pair-wise relationships, the proposed quality measure operates on the philosophy of improving transferability and discriminability simultaneously. In particular, we introduce three types of losses that complementarily and explicitly regularize the ranking feature space in a progressive manner. To enhance feature discriminability, we propose a center-based loss that rectifies the classifier and improves its prediction capability not only for the source domain (NIs) but also for the target domain (SCIs). To minimize feature discrepancy, the maximum mean discrepancy (MMD) is imposed on the ranking features extracted from NIs and SCIs. To further enhance feature diversity, we penalize the correlation between different feature dimensions, leading to features with lower rank and higher diversity. Experiments show that our method achieves higher performance across different source-target settings based on a lightweight convolutional neural network. The proposed method also sheds light on learning quality assessment measures for unseen application-specific content without cumbersome and costly subjective evaluations.
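The abstract names three complementary regularizers: a center-based loss for discriminability, an MMD term for cross-domain feature alignment, and a correlation penalty for feature diversity. The following is a minimal PyTorch sketch of plausible forms of each, not the paper's exact formulation; the single-bandwidth RBF kernel, the loss weights, the 5-level quality labels, and all function names are illustrative assumptions.

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    # Squared maximum mean discrepancy with a single Gaussian (RBF)
    # kernel; multi-kernel estimates are also common, so this
    # bandwidth choice is an assumption.
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def center_loss(feats, labels, centers):
    # Pull each sample's feature toward the center of its class
    # (here, a hypothetical discrete quality level), sharpening
    # the discriminability of the learned ranking features.
    return (feats - centers[labels]).pow(2).sum(dim=1).mean()

def correlation_penalty(feats, eps=1e-8):
    # Penalize off-diagonal entries of the feature correlation matrix,
    # pushing feature dimensions toward decorrelation, i.e., lower
    # redundancy and higher diversity.
    z = feats - feats.mean(dim=0, keepdim=True)
    cov = z.t() @ z / (feats.size(0) - 1)
    std = cov.diag().clamp_min(eps).sqrt()
    corr = cov / (std.unsqueeze(1) * std.unsqueeze(0))
    return (corr - torch.diag(corr.diag())).pow(2).sum()

# Toy usage on random "ranking features" from both domains.
ni = torch.randn(32, 128)             # source: natural images (NIs)
sci = torch.randn(32, 128)            # target: screen content images (SCIs)
labels = torch.randint(0, 5, (32,))   # assumed 5 quality levels for NIs
centers = torch.randn(5, 128)         # learnable class centers in practice
total = center_loss(ni, labels, centers) + mmd_rbf(ni, sci) \
        + 1e-2 * correlation_penalty(sci)
print(float(total))
```

In a full training loop these terms would be added to the ranking objective and minimized jointly, with only the source (NI) labels supervising the center loss while the MMD and correlation terms operate on unlabeled target (SCI) features.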
