Image Enhancement
309 papers with code • 6 benchmarks • 16 datasets
Image Enhancement is the process of improving the interpretability or perception of information in images for human viewers and of providing 'better' input for other automated image processing techniques. Its principal objective is to modify the attributes of an image to make it more suitable for a given task and a specific observer.
Source: A Comprehensive Review of Image Enhancement Techniques
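As an illustrative sketch (not drawn from any of the papers listed below), one of the simplest enhancement operations is power-law (gamma) correction, which brightens or darkens an image by remapping pixel intensities:

```python
import numpy as np

def gamma_correct(image, gamma=0.5):
    """Power-law intensity remapping.

    image: float array with values in [0, 1].
    gamma < 1 brightens (lifts mid-tones); gamma > 1 darkens.
    """
    return np.clip(image, 0.0, 1.0) ** gamma

# A uniformly dark synthetic image: gamma = 0.5 lifts 0.25 to 0.5.
dark = np.full((4, 4), 0.25)
enhanced = gamma_correct(dark, gamma=0.5)
```

Because the mapping fixes 0 and 1 while bending the curve in between, it changes perceived brightness without clipping the dynamic range.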
Libraries
Use these libraries to find Image Enhancement models and implementations.
Datasets
Subtasks
Latest papers
AdaIR: Adaptive All-in-One Image Restoration via Frequency Mining and Modulation
Our approach is motivated by the observation that different degradation types impact the image content on different frequency subbands, thereby requiring different treatments for each restoration task.
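The frequency-subband observation can be illustrated with a toy decomposition (this is not AdaIR's actual method, which mines frequency information with learned modules): a blur acts as a low-pass filter, and subtracting the blurred image isolates the high-frequency residual, so noise-like and blur-like degradations show up in different components.

```python
import numpy as np

def box_blur(image, k=3):
    """Crude low-pass filter: k x k box average with edge padding."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def split_subbands(image):
    """Split an image into low- and high-frequency components.

    The split is exact: low + high reconstructs the input.
    """
    low = box_blur(image)
    high = image - low
    return low, high

img = np.random.default_rng(0).random((8, 8))
low, high = split_subbands(img)
```

A restoration model can then treat the two components differently, e.g. denoising mostly in the high band while correcting illumination in the low band.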
End-To-End Underwater Video Enhancement: Dataset and Model
To fill this gap, we construct the Synthetic Underwater Video Enhancement (SUVE) dataset, comprising 840 diverse underwater-style videos paired with ground-truth reference videos.
FogGuard: guarding YOLO against fog using perceptual loss
In this paper, we present a novel fog-aware object detection network called FogGuard, designed to address the challenges posed by foggy weather conditions.
7T MRI Synthesization from 3T Acquisitions
We demonstrate that the V-Net based model has superior performance in enhancing both single-site and multi-site MRI datasets compared to the existing benchmark model.
Learning A Physical-aware Diffusion Model Based on Transformer for Underwater Image Enhancement
PA-Diff consists of a Physics Prior Generation (PPG) branch, an Implicit Neural Reconstruction (INR) branch, and a Physics-aware Diffusion Transformer (PDT) branch.
Misalignment-Robust Frequency Distribution Loss for Image Transformation
This paper addresses a common challenge in deep learning-based image transformation methods, such as image enhancement and super-resolution: their heavy reliance on paired datasets with precise pixel-level alignment.
You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement
Further, we design a novel Color and Intensity Decoupling Network (CIDNet) with two branches dedicated to processing the decoupled image brightness and color in the HVI space.
Troublemaker Learning for Low-Light Image Enhancement
Second, the predicting model (PM) enhances the brightness of pseudo low-light images.
Visual Text Meets Low-level Vision: A Comprehensive Survey on Visual Text Processing
Our aim is to establish this survey as a fundamental resource, fostering continued exploration and innovation in the dynamic area of visual text processing.
InstructIR: High-Quality Image Restoration Following Human Instructions
All-In-One image restoration models can effectively restore images from various types and levels of degradation using degradation-specific information as prompts to guide the restoration model.