Cascading Convolutional Color Constancy

24 Dec 2019  ·  Huanglin Yu, Ke Chen, Kaiqi Wang, Yanlin Qian, Zhao-Xiang Zhang, Kui Jia ·

Regressing the illumination of a scene from representations of object appearances is a popular approach in computational color constancy. However, the task remains challenging due to intrinsic appearance and label ambiguities caused by unknown illuminants, the diverse reflection properties of materials, and extrinsic imaging factors (such as different camera sensors). In this paper, we introduce a novel algorithm, Cascading Convolutional Color Constancy (C4 for short), to improve the robustness of regression learning and achieve stable generalization across datasets (different cameras and scenes) in a unified framework. The proposed C4 method ensembles a series of dependent illumination hypotheses, one from each cascade stage, by introducing a weighted multiply-accumulate loss function, which can inherently capture different modes of illumination and explicitly enforce coarse-to-fine network optimization. Experimental results on the public Color Checker and NUS 8-Camera benchmarks demonstrate the superior performance of the proposed algorithm compared with state-of-the-art methods, especially on more difficult scenes.
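The multiply-accumulate idea in the abstract can be sketched in a few lines: each stage re-estimates the illuminant of the image corrected by all previous estimates, the running estimate is refined by element-wise multiplication, and the training loss weights later stages more heavily to enforce coarse-to-fine refinement. The sketch below is a minimal NumPy illustration under those assumptions, not the authors' network; the stage estimators (here a toy gray-world function), the weighting scheme `alpha**(n-1-k)`, and the angular-error loss are placeholders standing in for the paper's convolutional stages and its exact loss.

```python
import numpy as np

def angular_error(pred, gt):
    """Angle (degrees) between predicted and ground-truth illuminant vectors."""
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def cascade_estimate(image, stages):
    """Run a cascade of illuminant estimators (hypothetical stand-ins for
    the paper's convolutional stages).

    Each stage sees the image corrected by the running illuminant estimate
    and contributes a refinement that is accumulated multiplicatively.
    """
    illum = np.ones(3)                    # start from a neutral illuminant
    corrected = image
    estimates = []                        # per-stage accumulated hypotheses
    for stage in stages:
        step = stage(corrected)           # this stage's (R, G, B) hypothesis
        illum = illum * step              # multiply-accumulate refinement
        illum = illum / np.linalg.norm(illum)
        corrected = image / np.maximum(illum, 1e-8)   # re-correct the input
        estimates.append(illum.copy())
    return illum, estimates

def weighted_cascade_loss(estimates, gt, alpha=0.5):
    """Weighted sum of per-stage angular errors; later (finer) stages get
    larger weights, encouraging coarse-to-fine optimization."""
    n = len(estimates)
    weights = [alpha ** (n - 1 - k) for k in range(n)]
    return sum(w * angular_error(e, gt) for w, e in zip(weights, estimates)) / sum(weights)

def gray_world(img):
    """Toy stage estimator: illuminant as the normalized mean pixel color."""
    m = img.reshape(-1, 3).mean(axis=0)
    return m / np.linalg.norm(m)
```

For example, on a synthetic image formed as `reflectance * illuminant`, a two-stage cascade `cascade_estimate(image, [gray_world, gray_world])` returns an estimate whose angular error to the ground-truth illuminant is lower than that of the neutral starting guess.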
