YACCD2 (Yet Another Color Constancy Database Updated)

After almost ten years, we have decided to update the original YACCD database, keeping its original features and adding new ones to make it suitable for testing a wider variety of visual and image processing algorithms, e.g. models of human color constancy, computational color constancy, human vision models, HDR tone rendering, intrinsic images, and other computer vision algorithms.
In particular, the new database, called YACCD2, introduces (or will introduce in the near future) the following new features:
  • Higher image resolution
  • A more recent set of illuminants
  • Acquisition with newer SLR cameras
  • Multiple exposures for HDR imaging
  • Images available in both JPEG and RAW formats
  • Stereo pairs of the scene
  • Reflectance data for each image, measured on the test scenes at acquisition time

The YACCD2 database consists of two sets of images: the first set comes from a low dynamic range (LDR) scene, while the second comes from a high dynamic range (HDR) scene.

From the acquisition point of view, the two datasets share the following common features:

  • Five light sources have been selected: a warm fluorescent, a cold fluorescent, a halogen lamp, a fluorescent tube with a strong yellow cast, and a set of blue LEDs arranged on a circular ring. For the LDR scene all five light sources have been used; for the HDR scene we considered a subset of three: warm fluorescent, cold fluorescent, and halogen.
  • Following the approach of the original YACCD, we have acquired the images using two different backgrounds based on a white-noise pattern.
  • We have acquired images of two different subjects: a standard 24-patch Macbeth Color Checker, and an object built from colored toy building bricks. The reflectance properties of the objects have been measured and are provided with the database.
  • We provide the data from the measurements taken with a colorimeter (Konica Minolta CA-2000) and the exposure values taken with a spot meter (Konica Minolta Spot Meter F).

The LDR and HDR datasets differ in some aspects of the shooting setup and illumination: in the LDR scenes we used a lighting booth, and in half of the images we also introduced a shadow.

For the LDR scenes we provide for download a 3-shot bracketed set, while for the HDR scenes we provide a 7-shot bracketed set, which can be used to build an HDR image with one's preferred tool or algorithm.
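As a sketch of how such a bracketed set might be fused, here is a minimal NumPy example of exposure-weighted HDR merging. It assumes a linear camera response and known exposure times; a real pipeline would first linearize the JPEGs (or demosaic the CR2 files), and the function name and hat-weighting scheme below are our own illustration, not part of the database tools.

```python
import numpy as np

def merge_bracketed(images, exposure_times):
    """Fuse linear-response exposures (float arrays in [0, 1]) into a
    single HDR radiance map via exposure-weighted averaging.

    A hat weighting function down-weights under- and over-exposed pixels.
    """
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peaks at mid-gray
        acc += w * (img / t)               # radiance estimate from this shot
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Synthetic 3-shot bracket of a constant-radiance patch:
radiance = 0.4
times = [0.5, 1.0, 2.0]
stack = [np.clip(np.full((4, 4), radiance * t), 0.0, 1.0) for t in times]
hdr = merge_bracketed(stack, times)  # recovers ~0.4 everywhere
```

The same idea underlies standard merging tools (e.g. Debevec-style radiance-map recovery), which additionally estimate the camera response curve instead of assuming linearity.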

Considering all the parameters and their possible combinations, the LDR dataset so far consists of 120 images (5 light sources × 2 subjects × 2 backgrounds × 2 shadow conditions × 3 exposures), while the HDR dataset consists of 84 images (3 light sources × 2 subjects × 2 backgrounds × 7 exposures).
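The image counts above follow directly from multiplying the acquisition parameters; a quick sanity check in Python (the tuple layout is ours, for illustration):

```python
from math import prod

# Acquisition parameters as listed above
ldr_params = (5, 2, 2, 2, 3)  # light sources, subjects, backgrounds, shadow conditions, exposures
hdr_params = (3, 2, 2, 7)     # light sources, subjects, backgrounds, exposures

print(prod(ldr_params))  # 120
print(prod(hdr_params))  # 84
```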

All images (5184×3456 pixels) are provided in both JPEG and CR2 (Canon RAW) formats.

Download files: 

Reflectance data of the Macbeth Color Checker and Toy Shuttle (64 Kb).

Spectral Power Distributions of illuminants considered in the database (21 Kb).

LDR scenes

Macbeth Color Checker (MCC):
  Background 1, RAW (773 Mb)
  Background 1, JPEG (215 Mb)
  Background 2, RAW (717 Mb)
  Background 2, JPEG (217 Mb)

Toy Shuttle:
  Background 1, RAW (869 Mb)
  Background 1, JPEG (267 Mb)
  Background 2, RAW (680 Mb)
  Background 2, JPEG (168 Mb)

HDR scenes

Macbeth Color Checker (MCC):
  Background 1, RAW (414 Mb)
  Background 1, JPEG (82 Mb)
  Background 2, RAW (399 Mb)
  Background 2, JPEG (75 Mb)

Toy Shuttle:
  Background 1, RAW (462 Mb)
  Background 1, JPEG (117 Mb)
  Background 2, RAW (433 Mb)
  Background 2, JPEG (102 Mb)

Other image databases for color constancy on the web:

Lab/Dept | Author | Organization | Link
The Computational Vision Lab | Brian Funt et al. | Simon Fraser University, CANADA | LINK
Ucentric Systems | John A. Watlington | Maynard, MA, USA | LINK
Department of Psychology | David H. Brainard | University of Pennsylvania, USA | LINK
Machine Vision Group | Matti Pietikäinen | University of Oulu, FINLAND | LINK
Harvard School of Engineering and Applied Sciences | A. Chakrabarti, K. Hirakawa, and T. Zickler | Harvard University, USA | LINK
Vision Group | Peter V. Gehler | Microsoft Research, USA | LINK
Centre de Visió per Computador | C. A. Parraga | Universitat Autònoma de Barcelona, SPAIN | LINK
Department of Psychology | A. Kitaoka | Ritsumeikan University, JAPAN | LINK