Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement

The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network. Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image. The curve estimation is specially designed, considering pixel value range, monotonicity, and differentiability. Zero-DCE is appealing in its relaxed assumption on reference images, i.e., it does not require any paired or unpaired data during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and drive the learning of the network. Our method is efficient as image enhancement can be achieved by an intuitive and simple nonlinear curve mapping. Despite its simplicity, we show that it generalizes well to diverse lighting conditions. Extensive experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods qualitatively and quantitatively. Furthermore, the potential benefits of our Zero-DCE to face detection in the dark are discussed. Code and model will be available at https://github.com/Li-Chongyi/Zero-DCE.
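
To make the "intuitive and simple nonlinear curve mapping" mentioned in the abstract concrete, below is a minimal NumPy sketch of the iterative quadratic curve used in Zero-DCE, LE(x) = x + α·x·(1−x), which preserves the [0, 1] pixel range and is monotonic for α in [−1, 1]. The 8-iteration setting follows the paper; the function name, array shapes, and the constant placeholder parameter maps (which DCE-Net would normally predict per pixel) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def enhance_with_curves(image, alpha_maps):
    """Iteratively apply the quadratic curve LE(x) = x + alpha * x * (1 - x).

    image      : float array in [0, 1], shape (H, W, 3)
    alpha_maps : sequence of per-pixel curve parameter maps in [-1, 1],
                 each of shape (H, W, 3); in Zero-DCE these come from
                 DCE-Net, here they are placeholders.
    """
    x = image
    for alpha in alpha_maps:
        # One curve application per iteration; output stays in [0, 1]
        # and the mapping is monotonic in x for alpha in [-1, 1].
        x = x + alpha * x * (1.0 - x)
    return x

# Usage with placeholder inputs (a trained DCE-Net would predict alpha_maps).
low_light = np.random.rand(256, 256, 3).astype(np.float32)
alpha_maps = [np.full((256, 256, 3), 0.6, dtype=np.float32) for _ in range(8)]
enhanced = enhance_with_curves(low_light, alpha_maps)
print(enhanced.min(), enhanced.max())  # remains within [0, 1]
```

Because each iteration reuses the same differentiable map, the composed high-order curve is itself differentiable, which is what allows the non-reference losses to train DCE-Net end to end.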


Datasets

DICM, LIME, MEF, NPE, VV

Results from the Paper


Task | Dataset | Model | Metric | Value | Global Rank
Low-Light Image Enhancement | DICM | Zero-DCE | User Study Score | 3.52 | #2
Low-Light Image Enhancement | DICM | Zero-DCE | NIQE | 4.58 | #3
Low-Light Image Enhancement | DICM | Zero-DCE | BRISQUE | 27.56 | #2
Color Constancy | INTEL-TUT2 | SRIE[8] | Best 25% | 3.2 | #1
Low-Light Image Enhancement | LIME | Zero-DCE | User Study Score | 3.8 | #2
Low-Light Image Enhancement | LIME | Zero-DCE | NIQE | 5.82 | #4
Low-Light Image Enhancement | LIME | Zero-DCE | BRISQUE | 20.44 | #2
Low-Light Image Enhancement | MEF | Zero-DCE | User Study Score | 3.87 | #2
Low-Light Image Enhancement | MEF | Zero-DCE | NIQE | 4.93 | #4
Low-Light Image Enhancement | MEF | Zero-DCE | BRISQUE | 17.32 | #2
Low-Light Image Enhancement | NPE | Zero-DCE | User Study Score | 3.81 | #2
Low-Light Image Enhancement | NPE | Zero-DCE | NIQE | 4.53 | #4
Low-Light Image Enhancement | NPE | Zero-DCE | BRISQUE | 20.72 | #2
Low-Light Image Enhancement | VV | Zero-DCE | User Study Score | 3.24 | #2
Low-Light Image Enhancement | VV | Zero-DCE | NIQE | 4.81 | #3
Low-Light Image Enhancement | VV | Zero-DCE | BRISQUE | 34.66 | #2

Methods


No methods listed for this paper.