This page describes two hyperspectral images, which we refer to as the BearFruit1B and BearFruit1Y images. They were taken in a special laboratory room [5, 6] where the illumination was under computer control. The images are of the same set of surfaces under two different illuminants: one bluish (BearFruit1B), the other yellowish (BearFruit1Y). You can see a plot of the bluish and yellowish illuminant spectra by clicking here.
These images were acquired by E. A. Harding and me as part of ongoing research on color image processing. You are welcome to use these images, but you should provide a citation [0], just as you would when you use data from any scientific publication. Other relevant references are provided here. (For example, Vora et al. [1] describe the initial camera design and a characterization of its performance for digital camera simulation.)
NOTE: For most purposes, the Bear and Fruit Gray images are superior to these. The dark correction and image alignment procedures used here were improved for the Bear and Fruit Gray images. Also note that these images were not rendered to RGB using the same calibration data as the Bear and Fruit Gray images.
Each hyperspectral image consists of 31 monochromatic image planes, corresponding to wavelengths between 400 and 700 nm (inclusive) in 10 nm steps.
Each monochromatic image is stored in a raw format.
For a listing of a MATLAB function that will read the individual images into a matrix variable, click here. (This function was most recently tested on a Mac under MATLAB 2020a. See the writeup for reading the BearFruitGray images for a little more on system compatibility.)
The monochromatic images in the archive are named according to the convention 400, 410, ..., 700. These individual images correspond to 400 nm, 410 nm, ..., 700 nm respectively. In the archive, the monochromatic images are stored in the subdirectory ALIGN.
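As a concrete illustration, the planes can be assembled into a single MATLAB image cube along the lines of the sketch below. The reader function readMonoImage is a hypothetical stand-in for the linked MATLAB listing, which is where the raw-format details (pixel type, byte order, image dimensions) actually live.

    % Assemble the 31 monochromatic planes into one hyperspectral cube.
    % readMonoImage is a placeholder for the reading function linked above.
    wls = 400:10:700;                                % wavelengths, 10 nm steps
    nPlanes = length(wls);
    for k = 1:nPlanes
        fname = fullfile('ALIGN', num2str(wls(k))); % files named 400 ... 700
        plane = double(readMonoImage(fname));       % one monochromatic plane
        if (k == 1)
            [nRows, nCols] = size(plane);
            hyperCube = zeros(nRows, nCols, nPlanes);
        end
        hyperCube(:, :, k) = plane;
    end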
The individual monochromatic images were taken using different f-stops and different exposure durations. To produce a calibrated hyperspectral image, each individual image must be scaled by a calibration factor. The calibration factors were determined by comparing the image data at a reference location to direct radiometric measurements (PhotoResearch PR-650) of the light coming from this location. For the two images here, the reference location was the white paper in the upper right of the images. The white paper was a Munsell matte N 9.5/ paper. To obtain an estimate of the illuminant incident at the reference location, multiply the reference spectrum by 1.12. The calibration factors and reference spectrum are provided as a Macintosh text file calibration.mtxt. This file should be easy to read on other platforms as well.
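As a sketch (not our actual calibration code), applying the factors and estimating the illuminant might look like this in MATLAB, assuming calibration.mtxt has been parsed into the 31-element vectors calFactors and refSpectrum (both hypothetical variable names; the file layout is documented in the file itself):

    % Scale each raw plane by its calibration factor to obtain
    % calibrated spectral data (hyperCube as assembled above).
    for k = 1:nPlanes
        hyperCube(:, :, k) = calFactors(k) * hyperCube(:, :, k);
    end

    % Estimate the illuminant incident at the reference location
    % (the white paper), per the factor of 1.12 given above.
    illuminantEstimate = 1.12 * refSpectrum;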
To view a hyperspectral image, it is useful to reduce it to an RGB format. The RGB images are useful for obtaining a sense of the appearance of the image, but the values should not be used for calculations - they are specific to a particular monitor and rendering procedure. For calculations, the underlying spectral data should be used.

For the images shown, the rendering to RGB was done in two basic steps. First, we used the hyperspectral images together with the CIE 1931 color matching functions to calculate the XYZ tristimulus values at each image location. We then used monitor calibration information (for an Apple 20" color monitor that was available in our lab) to compute RGB image values that produce an image that is pixel-by-pixel metameric to the XYZ image derived from the hyperspectral image. Some regions of the hyperspectral image were out of gamut -- producing the appropriate monitor metamer would require negative power on one or more guns, or more light than the monitor could produce. These were brought into gamut using some combination of scaling and clipping before gamma correction.

The particular monitor data used for gamma correction has the feature that there is a relatively high threshold before variation in input values has any effect on the light output. Because of this and the gamut mapping, the rendered images on this page can appear washed out. We have since developed procedures for producing nicer looking RGB images. Please contact David Brainard if you are interested in knowing more about this particular issue.
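In code, the two steps look roughly like the MATLAB sketch below. Here T_xyz (a 3 by 31 matrix of the CIE 1931 color matching functions sampled at the image wavelengths) and M_rgb (the 3 by 3 matrix from the monitor calibration that maps linear RGB to XYZ) are hypothetical variable names, and the simple clip and fixed exponent stand in for the scaling-plus-clipping gamut mapping and measured gamma correction actually used:

    % Reshape the cube so each column holds one pixel's spectrum.
    [nRows, nCols, nPlanes] = size(hyperCube);
    spectra = reshape(hyperCube, nRows * nCols, nPlanes)';  % 31 x nPixels

    % Step 1: spectra -> CIE 1931 XYZ. (In practice a wavelength-step
    % and luminance normalization factor would also be applied.)
    XYZ = T_xyz * spectra;

    % Step 2: XYZ -> linear monitor RGB via the calibration matrix,
    % then crude gamut handling (clip) and gamma correction.
    RGBlinear = M_rgb \ XYZ;                       % invert the monitor matrix
    RGBlinear = min(max(RGBlinear, 0), 1);         % clip into the monitor gamut
    gammaExp = 2.2;                                % assumed exponent, not measured
    RGBimage = reshape((RGBlinear .^ (1/gammaExp))', nRows, nCols, 3);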
Each image archive contains the raw images, the file calibration.mtxt, and a TIFF file containing the rendered RGB image.
The archives are in compressed UNIX tar format. For Macintosh users, this format is easily unpacked using the StuffIt application.
1. The images were acquired by placing interference filters between the back of the lens and the CCD chip. This arrangement minimizes spatial artifacts in the images. But it has the effect of producing changes in image location and scale from filter to filter. (In addition, we had to refocus the camera for every filter.) To correct for this, we used a global affine motion estimation algorithm to compute transformations that register each image with the 550 nm image. We used code provided by Heeger that implements an algorithm described by Bergen et al. [8]. For more information on the registration software, go here. We estimated the transformations from a 512 by 512 pixel subimage that contained the resolution target in the upper center of the image, then applied these transformations to the whole images. We did this because the image registration algorithm, which was not designed to handle color, works robustly across wavelength only if applied to image regions that contain predominantly luminance modulation. To see the effect of image registration, go here.
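For readers without access to that code, a comparable registration can be sketched with the MATLAB Image Processing Toolbox. Note this uses a generic intensity-based affine registration (imregtform), not the Bergen et al. algorithm we used, and for simplicity it registers whole planes rather than estimating the transformation from a luminance-dominated subregion as described above:

    % Register wavelength plane k to the 550 nm plane (wls(16) == 550)
    % and warp it into the 550 nm frame. This is a generic substitute
    % for the Heeger/Bergen code, not the procedure used for these images.
    k = 1;                                          % example: the 400 nm plane
    fixedPlane = hyperCube(:, :, 16);
    movingPlane = hyperCube(:, :, k);
    [optimizer, metric] = imregconfig('monomodal');
    tform = imregtform(movingPlane, fixedPlane, 'affine', optimizer, metric);
    hyperCube(:, :, k) = imwarp(movingPlane, tform, ...
        'OutputView', imref2d(size(fixedPlane)));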
2. Note that because of the registration procedure, there is some distortion around the edges of the images. You should trim the edges off before performing any quantitative analyses.
3. The BearFruit1B and BearFruit1Y images are not exactly registered with each other. They were taken on different days and the camera was repositioned between the two days. The objects in the scene were not moved. The fruit was presumably a little riper when the BearFruit1Y image was taken.
4. For the BearFruit1Y image, we checked the camera spectral calibration by measuring reference spectra at three locations other than the white panel and comparing these spectra with the spectra derived from the hyperspectral image data. These comparisons can be viewed here.
5. We have not yet fully characterized the optical MTF of our camera system. Since we refocus for each individual wavelength, chromatic aberrations are probably minimized. The image of the resolution target could probably be used to estimate the spatial MTF at each wavelength.
6. The spatial resolution of the camera is 102.4 pixels per degree of visual angle.
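(For example, the 512 by 512 pixel subimage used for registration spans 512 / 102.4 = 5 degrees of visual angle.)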
7. There are some problems with stray light with our system, particularly in the short wavelength regions of these images. This has the effect of reducing image contrast. We will do a better job of calibrating and correcting for these problems in the future data sets. Also note that the apparent image contrast is strongly influenced by monitor gamma. So if your monitor doesn't match ours, you will want to re-render the LMS images from the hyperspectral data.
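Re-rendering follows the same pattern as the XYZ computation above. A minimal sketch of the cone-coordinate step, assuming T_cones is a 3 by 31 matrix of cone fundamentals sampled at the image wavelengths (a hypothetical variable name; use whichever fundamentals you prefer):

    % Compute an LMS (cone coordinate) image from the calibrated cube;
    % T_cones holds cone fundamentals sampled at 400:10:700.
    [nRows, nCols, nPlanes] = size(hyperCube);
    spectra = reshape(hyperCube, nRows * nCols, nPlanes)';  % 31 x nPixels
    LMSimage = reshape((T_cones * spectra)', nRows, nCols, 3);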
Listing of MATLAB code to read raw image format
Plots of illuminant spectra for BearFruit1B and BearFruit1Y
Spectral calibration check for BearFruit1Y
Effect of image registration
Obtain Heeger's registration code
Acknowledgments
J. E. Farrell, E. A. Harding, J. M. Kraft, M. D. Rutherford, J. D. Tietz, and P. L. Vora helped with camera design, camera calibration, and/or image acquisition. D. J. Heeger provided the image registration code. The work was supported primarily by a philanthropic gift from the Hewlett-Packard Corporation.