This website presents work on Color Constancy. It contains an explanation of the topic as well as an overview of various approaches to achieve it. Further, results of several methods applied to some of the publicly available data sets are provided, along with reproduction errors (as proposed by Finlayson and Zakizadeh at BMVC 2014). Finally, links to various publicly available data sets and source codes are provided. If you have any questions, comments, remarks or additions to this website, please write an e-mail to the contact person of this website. Feel free to refer to this website in your publications if you use any of the information found here (such as the pre-computed results of several algorithms on various data sets). Drop me a line if you would like to have your publication mentioned on this page.
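For reference, both error measures reported on this site are simple to compute. The sketch below (plain NumPy, variable names are mine) follows the standard definitions: the recovery error is the angle between the estimated and true illuminant RGBs, while the reproduction error of Finlayson and Zakizadeh is the angle between a white patch corrected with the estimate and the achromatic direction (1, 1, 1).

```python
import numpy as np

def recovery_angular_error(est, gt):
    """Recovery angular error between illuminant RGB vectors, in degrees."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def reproduction_angular_error(est, gt):
    """Reproduction angular error (Finlayson & Zakizadeh, BMVC 2014):
    angle between gt/est (element-wise, i.e. a white surface corrected
    with the estimate) and the ideal achromatic direction (1, 1, 1)."""
    w = gt / np.maximum(est, 1e-9)
    cos = w.sum() / (np.linalg.norm(w) * np.sqrt(3.0))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# toy example: a slightly biased estimate of a neutral illuminant
est = np.array([0.9, 1.0, 1.1])
gt = np.array([1.0, 1.0, 1.0])
print(recovery_angular_error(est, gt), reproduction_angular_error(est, gt))
```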
Ershov et al. propose the Cube++ Illumination Dataset. Cube++ is an illumination estimation dataset that builds on the Cube+ dataset. It includes 4890 images of different scenes with known illumination colors, as well as additional semantic data that can make the learning process more accurate. Because the SpyderCube color target is used, every image comes with two ground-truth illumination records covering different directions. Cube++ was specifically designed to tackle issues present in some other datasets, such as too few images, inadequate image quality, lack of scene diversity, absence of version tracking, violation of various assumptions, GDPR violations, lack of information about the shooting procedure, etc. The rich content and illumination variety were achieved by including images taken in Austria, Croatia, Czechia, Georgia, Germany, Romania, Russia, Slovenia, Turkey, and Ukraine.
You can find the paper here and the dataset download link here.
Afifi et al. propose Cross-Camera Convolutional Color Constancy (C5), a learning-based color constancy method trained on images from multiple cameras. The method accurately estimates a scene's illuminant color from raw images captured by a new camera that was unseen during training. C5 is a hypernetwork-like extension of the convolutional color constancy (CCC) approach: C5 learns to generate the weights of a CCC model that is then evaluated on the input image, with the CCC weights dynamically adapted to different input content. Unlike prior cross-camera color constancy models, which are usually designed to be agnostic to the spectral properties of test-set images from unobserved cameras, C5 approaches the problem through the lens of transductive inference: additional unlabeled images are provided to the model at test time, which allows it to calibrate itself to the spectral properties of the test-set camera during inference.
You can find the paper here with the public code repository here.
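To make the hypernetwork idea concrete, here is a loose, illustrative PyTorch sketch, not the authors' code: the histogram resolution, filter size, and network shapes below are all arbitrary assumptions. A small encoder consumes the query histogram together with a few unlabeled histograms from the same camera and emits the weights of a CCC-style filter, which is then convolved with the query histogram to localize the illuminant in log-chroma space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

BINS = 64   # log-chroma histogram resolution (an assumption)
K = 9       # size of the generated CCC-style filter (an assumption)

class TinyC5(nn.Module):
    """Hypernetwork sketch: unlabeled test-camera histograms condition
    the CCC filter that is applied to the query histogram."""
    def __init__(self, n_extra=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1 + n_extra, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.head = nn.Linear(32 * 4 * 4, K * K)  # emits filter weights

    def forward(self, query_hist, extra_hists):
        # query_hist: (B, 1, BINS, BINS); extra_hists: (B, n_extra, BINS, BINS)
        feats = self.encoder(torch.cat([query_hist, extra_hists], dim=1))
        filt = self.head(feats).view(-1, 1, K, K)
        # per-sample convolution via the grouped-convolution trick
        b = query_hist.shape[0]
        heat = F.conv2d(query_hist.view(1, b, BINS, BINS), filt,
                        padding=K // 2, groups=b).view(b, -1)
        p = F.softmax(heat, dim=1).view(b, BINS, BINS)
        # soft-argmax: expected (u, v) illuminant location in bin coordinates
        ax = torch.arange(BINS, dtype=p.dtype, device=p.device)
        u = (p.sum(dim=2) * ax).sum(dim=1)
        v = (p.sum(dim=1) * ax).sum(dim=1)
        return torch.stack([u, v], dim=1)
```

Because the filter is regenerated for every input, the same trained hypernetwork can specialize itself to an unseen camera at test time simply by being handed a handful of its unlabeled images.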
Qian et al. propose a neural network-based solution for the three tracks of the 2nd International Illumination Estimation Challenge. They build on a pre-trained SqueezeNet backbone, a differentiable 2D chroma histogram layer, and a shallow MLP utilizing Exif information. By combining semantic features, color features, and Exif metadata, the resulting method (SDE-AWB) obtained 1st place in both the indoor and two-illuminant tracks and 2nd place in the general track.
The paper can be accessed here.
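As a rough illustration of what a differentiable histogram layer can look like (this is a generic soft-binning sketch, not the SDE-AWB layer; the bin count and kernel width are assumptions), each pixel's chromaticity votes into all bins with Gaussian weights, so gradients flow back to the pixel values:

```python
import torch

def soft_chroma_histogram(rgb, bins=64, sigma=0.02):
    """Differentiable 2D chroma histogram (a generic sketch).

    rgb: (N, 3) linear RGB pixel values. Chromaticities r = R/(R+G+B)
    and g = G/(R+G+B) vote into all bins with RBF weights, keeping the
    whole operation differentiable with respect to the input pixels.
    """
    s = rgb.sum(dim=1, keepdim=True).clamp_min(1e-6)
    rg = rgb[:, :2] / s                               # (N, 2) chromaticities
    centers = torch.linspace(0.0, 1.0, bins)          # shared bin centers
    wr = torch.exp(-((rg[:, :1] - centers) ** 2) / (2 * sigma ** 2))
    wg = torch.exp(-((rg[:, 1:] - centers) ** 2) / (2 * sigma ** 2))
    hist = wr.t() @ wg                                # (bins, bins) soft counts
    return hist / hist.sum().clamp_min(1e-6)
```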
Qian et al. introduce a new Temporal Color Constancy (CC) benchmark. The conventional approach is to use a single frame (the shot frame) to estimate the scene illumination color. In temporal CC, multiple frames from the viewfinder sequence are used to estimate the color. However, there are no realistic large-scale temporal color constancy datasets for method evaluation. The benchmark comprises (1) 600 real-world sequences recorded with a high-resolution mobile phone camera, (2) a fixed train-test split which ensures consistent evaluation, and (3) a baseline method which achieves high accuracy on the new benchmark and on the dataset used in previous works.
Find the paper here with the public code here. The dataset can be obtained from here.
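As a toy illustration of why temporal CC can help (a generic sketch, not the paper's baseline): per-frame estimates from any single-frame method can be fused over the viewfinder sequence, for example with an exponential moving average that damps frame-to-frame noise.

```python
import torch

def gray_world(frame):
    """Single-frame gray-world estimate; frame: (3, H, W) linear RGB."""
    e = frame.mean(dim=(1, 2))
    return e / e.norm()

def temporal_estimate(frames, alpha=0.2):
    """Fuse per-frame estimates over a viewfinder sequence with an
    exponential moving average (a generic baseline, not the paper's)."""
    est = None
    for frame in frames:            # frames: iterable of (3, H, W) tensors
        e = gray_world(frame)
        est = e if est is None else (1 - alpha) * est + alpha * e
    return est / est.norm()
```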
Yu et al. introduce Cascading Convolutional Color Constancy (C4 for short), a novel algorithm to improve the robustness of regression learning and achieve stable generalization across datasets (different cameras and scenes) in a unified framework. The proposed C4 method ensembles a series of dependent illumination hypotheses from each cascade stage by introducing a weighted multiply-accumulate loss function, which can inherently capture different modes of illumination and explicitly enforce coarse-to-fine network optimization.
The paper can be accessed here, with the publicly available code present here.
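The cascade idea can be sketched as follows (an illustrative PyTorch toy, not the released model; the per-stage backbone and the loss weights are stand-ins): each stage sees the image corrected by the product of all previous estimates and refines the running illuminant, and the loss weights later, finer stages more heavily.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_backbone():
    # stand-in per-stage regressor (the real C4 uses a full CNN backbone)
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 3))

class CascadeCC(nn.Module):
    """Coarse-to-fine cascade: each stage refines the accumulated estimate."""
    def __init__(self, n_stages=3):
        super().__init__()
        self.stages = nn.ModuleList(tiny_backbone() for _ in range(n_stages))

    def forward(self, img):                      # img: (B, 3, H, W)
        cumulative = torch.ones(img.shape[0], 3, device=img.device)
        ests = []
        for stage in self.stages:
            # correct the image by the illuminant accumulated so far
            corrected = img / cumulative.view(-1, 3, 1, 1).clamp_min(1e-6)
            e = F.normalize(stage(corrected).abs() + 1e-6, dim=1)
            cumulative = F.normalize(cumulative * e, dim=1)  # multiply-accumulate
            ests.append(cumulative)
        return ests                              # one cumulative estimate per stage

def cascade_loss(ests, gt, weights=(0.25, 0.5, 1.0)):
    # angular error per stage; the exact weighting in the paper may differ
    losses = [torch.acos(F.cosine_similarity(e, gt, dim=1)
                         .clamp(-1 + 1e-6, 1 - 1e-6)).mean()
              for e in ests]
    return sum(w * l for w, l in zip(weights, losses))
```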
Qian et al. propose a novel grayness index for finding gray pixels and demonstrate its effectiveness and efficiency in illumination estimation. The grayness index (GI for short) is derived using the Dichromatic Reflection Model and is learning-free. GI allows estimating one or multiple illumination sources in color-biased images. GI is simple and fast: written in a few dozen lines of code, it processes a 1080p image in about 0.4 seconds with non-optimized Matlab code.
Access the paper here. The code is freely available here.
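To convey the underlying intuition (this is a loose NumPy sketch, not the paper's exact formula): under the dichromatic assumptions, a gray pixel's local contrast should be identical in the log of every color channel, so pixels can be ranked by how much their per-channel log contrasts disagree and the most "gray" ones averaged to estimate the illuminant.

```python
import numpy as np
from scipy.ndimage import laplace

def grayness_illuminant(img, top_percent=0.1, eps=1e-6):
    """Grayness-index style estimator (simplified illustration only).

    img: (H, W, 3) linear RGB, float. Pixels whose log-channel Laplacians
    agree across channels are treated as gray and averaged.
    """
    log_img = np.log(img + eps)
    d = np.stack([laplace(log_img[..., c]) for c in range(3)], axis=-1)
    # disagreement of per-channel local contrast = non-grayness score
    score = np.abs(d - d.mean(axis=-1, keepdims=True)).sum(axis=-1)
    n = max(1, int(score.size * top_percent / 100))
    idx = np.unravel_index(np.argsort(score, axis=None)[:n], score.shape)
    e = img[idx].mean(axis=0)
    return e / np.linalg.norm(e)   # unit-norm illuminant estimate
```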
Hernandez-Juarez et al. propose a Bayesian framework to improve generalization to new devices via a multi-hypothesis strategy (CVPR 2020). Firstly, a set of candidate scene illuminants is selected in a data-driven fashion and applied to a target image to generate a set of corrected images. Secondly, for each corrected image, the likelihood of the light source being achromatic is estimated using a camera-agnostic CNN. Finally, the proposed method explicitly learns a final illumination estimate from the generated posterior probability distribution.
You can access the paper using this link and the code is also publicly available.
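The multi-hypothesis step can be summarized in a few lines (an illustrative PyTorch sketch; `achromatic_net` here is a dummy stand-in for the paper's camera-agnostic likelihood CNN, and the final learned aggregation is simplified to a posterior-weighted mean):

```python
import torch
import torch.nn as nn

def multi_hypothesis_estimate(img, candidates, achromatic_net):
    """img: (3, H, W) raw RGB; candidates: (K, 3) candidate illuminants;
    achromatic_net: any network scoring how achromatic an image looks."""
    corrected = img.unsqueeze(0) / candidates.view(-1, 3, 1, 1).clamp_min(1e-6)
    logits = achromatic_net(corrected).view(-1)    # (K,) likelihood scores
    posterior = torch.softmax(logits, dim=0)       # distribution over candidates
    est = (posterior.unsqueeze(1) * candidates).sum(dim=0)
    return est / est.norm()

# toy usage with a dummy scorer (illustrative only)
scorer = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, 1))
img = torch.rand(3, 64, 64)
candidates = torch.rand(16, 3) + 0.1
print(multi_hypothesis_estimate(img, candidates, scorer))
```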
Afifi et al. propose a method to improve the accuracy of statistical illumination estimation methods by applying an as-projective-as-possible bias-correction function (JOSA A, 2019). You can access the paper using this link and the code is also publicly available.
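The flavor of the correction can be illustrated with a much simpler global version (the paper's function is locally adaptive, "as projective as possible"; the sketch below fits a single homography by standard DLT least squares and is only a stand-in):

```python
import numpy as np

def fit_projective_correction(est_rg, gt_rg):
    """Fit one global 2D projective map from estimated illuminant
    chromaticities to ground truth (simplified stand-in for the paper's
    locally adaptive correction). est_rg, gt_rg: (N, 2) arrays of (r, g)."""
    rows = []
    for (x, y), (u, v) in zip(est_rg, gt_rg):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 3)      # homography H, defined up to scale

def apply_correction(H, rg):
    p = H @ np.append(rg, 1.0)
    return p[:2] / p[2]              # bias-corrected (r, g) chromaticity
```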
In another work, Afifi et al. introduce a novel CNN-based illuminant estimation framework that learns a sensor-independent working space, which can be used to canonicalize the RGB values of any arbitrary camera sensor (BMVC 2019, oral). You can access the paper using this link and the code is also publicly available.
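The core idea admits a compact sketch (illustrative PyTorch only, not the released model; the network shape is an assumption): a small CNN predicts an image-specific 3x3 matrix that maps the camera's RGB into a shared working space, so the downstream illuminant estimator only ever sees canonicalized inputs.

```python
import torch
import torch.nn as nn

class SensorMapper(nn.Module):
    """Predicts an image-specific 3x3 mapping into a learned,
    sensor-independent working space (simplified sketch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 9))

    def forward(self, img):                        # img: (B, 3, H, W)
        m = self.net(img).view(-1, 3, 3)           # per-image mapping matrix
        b, _, h, w = img.shape
        flat = img.view(b, 3, h * w)
        return torch.bmm(m, flat).view(b, 3, h, w) # canonicalized image
```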
The authors of a new color constancy paper published at CVPR 2019, “When Color Constancy Goes Wrong: Correcting Improperly White-Balanced Images” by Afifi et al., have made their dataset publicly available. It contains over 65,000 pairs of incorrectly white-balanced images and their corresponding correctly white-balanced images.
Check the project page or directly access the dataset. The code is also publicly available.