NR-IQA
27 papers with code • 1 benchmark • 1 dataset
Most implemented papers
RankIQA: Learning from Rankings for No-reference Image Quality Assessment
Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and even outperforms state-of-the-art full-reference IQA (FR-IQA) methods, without having to resort to high-quality reference images to infer IQA.
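RankIQA trains on image pairs whose relative quality order is known (e.g. increasing distortion levels of the same image). A minimal sketch of the pairwise ranking objective behind this idea, assuming a hinge (margin) formulation over scalar quality scores; the function name and margin value are illustrative, not taken from the paper's code:

```python
def ranking_hinge_loss(score_better, score_worse, margin=1.0):
    """Pairwise ranking hinge loss for learning-to-rank IQA.

    Penalizes the model whenever the image known to be of higher
    quality is not scored at least `margin` above the worse one.
    """
    return max(0.0, margin - (score_better - score_worse))
```

With such a loss, rank supervision can be generated for free by synthetically distorting images, sidestepping the need for expensive human opinion scores during pre-training.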
No-Reference Quality Assessment of Contrast-Distorted Images using Contrast Enhancement
No-reference image quality assessment (NR-IQA) aims to measure image quality without a reference image.
MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment
No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception.
No-Reference Image Quality Assessment in the Spatial Domain
We propose a natural scene statistic-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model that operates in the spatial domain.
Which Has Better Visual Quality: The Clear Blue Sky or a Blurry Animal?
The proposed method, SFA, is compared with nine representative blur-specific NR-IQA methods, two general-purpose NR-IQA methods, and two extra full-reference IQA methods on Gaussian blur images (with and without Gaussian noise/JPEG compression) and realistic blur images from multiple databases, including LIVE, TID2008, TID2013, MLIVE1, MLIVE2, BID, and CLIVE.
Controllable List-wise Ranking for Universal No-reference Image Quality Assessment
First, to extend the authentically distorted image dataset, we present an imaging-heuristic approach in which over-/underexposure is formulated as an inverse of the Weber-Fechner law, and a fusion strategy and probabilistic compression are adopted to generate degraded real-world images.
MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment
The underlying idea is to learn the meta-knowledge shared by humans when evaluating the quality of images with various distortions, which can then be adapted easily to unknown distortions.
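Optimization-based meta-learning of this kind typically follows a MAML-style loop: adapt a shared initialization on each distortion-specific task, then update the initialization from the adapted models' errors. A first-order sketch on a toy scalar regressor, assuming squared-error task losses; all names and the one-sample support/query split are illustrative simplifications, not MetaIQA's actual training code:

```python
def grad_mse(theta, x, y):
    # Gradient of the squared error (theta*x - y)^2 w.r.t. theta.
    return 2 * (theta * x - y) * x

def maml_step(theta, tasks, inner_lr=0.1, outer_lr=0.1):
    """One meta-update over a batch of tasks.

    Each task is ((x_support, y_support), (x_query, y_query)).
    Adapt on the support sample, evaluate on the query sample,
    and move the shared initialization `theta` accordingly
    (first-order approximation: reuse the gradient at the
    adapted parameters as the outer gradient).
    """
    outer_grad = 0.0
    for (xs, ys), (xq, yq) in tasks:
        adapted = theta - inner_lr * grad_mse(theta, xs, ys)
        outer_grad += grad_mse(adapted, xq, yq)
    return theta - outer_lr * outer_grad / len(tasks)
```

In the NR-IQA setting, each "task" would be a distortion type, so the learned initialization can be fine-tuned quickly on images with distortions never seen during meta-training.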
No-Reference Image Quality Assessment Based on the Fusion of Statistical and Perceptual Features
The goal of no-reference image quality assessment (NR-IQA) is to predict the quality of an image as perceived by human observers without using any pristine, reference images.
Generalizable No-Reference Image Quality Assessment via Deep Meta-learning
Based on these two task sets, an optimization-based meta-learning approach is proposed to learn a generalized NR-IQA model, which can be directly used to evaluate the quality of images with unseen distortions.
No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency
Specifically, we enforce self-consistency between the outputs of our quality assessment model for each image and its horizontally flipped transformation, to exploit the rich self-supervisory information and reduce the uncertainty of the model.
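The self-consistency idea rests on the observation that horizontal flipping should not change perceived quality, so the two predictions can be tied together with a simple penalty. A minimal sketch, assuming an L1 consistency term over scalar quality predictions and images as nested lists of pixel values; the helper names are hypothetical, not from the paper's code:

```python
def hflip(img):
    # Horizontal flip: reverse each row of the image.
    return [row[::-1] for row in img]

def self_consistency_loss(predict, img):
    """L1 distance between quality predictions for an image
    and its horizontally flipped version. A flip-invariant
    predictor incurs zero loss."""
    return abs(predict(img) - predict(hflip(img)))
```

During training this term is added to the supervised quality loss, pushing the model toward flip-invariant predictions without any extra labels.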