The Science of Color

Published Oct. 21, 2012

RIT's Munsell Color Science Laboratory is in the business of decoding appearance: measuring color, texture, gloss, and translucency, and gauging perception to understand why materials look the way they do. Perception experiments quantify the human response to appearance, reducing it to mathematical models that feed back into our broader understanding of the science of color.

Combining the Real and the Digital

"We're one of the few labs in the world that is doing all of this, particularly the idea of not just using physical parameters as metrics but adding in the last step, to have people look at the results and ask, 'Are we really measuring what we think we are measuring?'" says Roy Berns, the Richard S. Hunter Professor in Color Science, Appearance and Technology, and director of the Munsell Color Science Lab. "Measuring our chromatic world and relating it to the human experience through mathematical modeling is the backbone of color science at RIT."

In 1985, the master's degree program in color science and the Munsell Color Science Laboratory moved into the new Chester F. Carlson Center for Imaging Science. The young program centered around Berns and Mark Fairchild, now associate dean for research and graduate education in the College of Science, whose research put RIT on the color science map.

"The Munsell Lab has a long history and a distinguished reputation in color science," Berns says. "Being the only color science graduate program in the U.S., and being here for so long, it really has populated the field. Anywhere you go now having to do with color science, you'll run into someone who has a connection with RIT."

Restoring The Bedroom: Working with the conservation, curatorial, and imaging departments at the Van Gogh Museum in Amsterdam, Roy Berns assisted in the digital reconstruction of The Bedroom, bringing back the colors that Van Gogh described to Paul Gauguin as “walls of pale violet... the doors lilac.” (Reproduced with permission from the Van Gogh Museum.)

In 2007, RIT added a doctoral program in color science. The addition of two faculty members—Jim Ferwerda and Jinwei Gu—brought a new dimension to the Munsell Lab. Ferwerda, who arrived in 2007 with a strong background in computer graphics, is pushing the limits of what's possible with display systems and three-dimensional visualization. Gu, who came on board in 2010, moves the department further toward computer science with computational photography—a blend of computer vision and computer graphics.

"The four of us complement each other," Fairchild says. "Jim and Jinwei bring some new things that Roy and I wouldn't have done on our own, but yet they are not so completely different. It's four different approaches on similar topics."

From Van Gogh to Seurat

Berns' research efforts include color and imaging science projects in support of art conservation and the documentation of national treasures. He has developed novel imaging techniques for museums, funded principally by the Andrew W. Mellon Foundation, and patented a multispectral camera system to make high-color-accuracy image archives.

Berns is active in what he calls "digital rejuvenation," in which images, such as Vincent van Gogh's The Bedroom (pictured above), are produced simulating the original appearance of paintings and drawings that have changed dramatically in color over time. In 2003-2004, Berns led a team of conservators and photographers in creating a full-sized printed digital simulation of Georges Seurat's A Sunday on La Grande Jatte for the Art Institute of Chicago's exhibition, "Seurat and the Making of La Grande Jatte."

More recently, Berns imaged Vincent van Gogh's Starry Night at the Museum of Modern Art. Berns used two different systems and his patented camera approach to mathematically derive information about the painting's color, surface topography, and gloss.

Rediscovering Starry Night: Munsell scientists Dave Wyble, Roy Berns, and Tongo Chen partner with the Museum of Modern Art to develop new methods to image their stellar collection such as Vincent van Gogh’s iconic painting Starry Night.

"Combining all this information gives us the ability to do the computer graphics on Starry Night, so that you can now start to look at it from different angles on your display," Berns says. "You can look at the topography or the three-dimensionality. So we have this system that enables us to look at Starry Night as a 3D object in space. No one has ever done this kind of imaging work on paintings of this size."

High Dynamic Range Imaging

Berns' work overlaps with Fairchild's interest in High Dynamic Range (HDR) imaging. HDR captures a greater dynamic range between the lightest and darkest areas of a scene, recording more detail and image data than traditional technologies.

The team is currently working to merge Liquid Crystal Display (LCD) technology, used in flat-panel displays, with HDR to create novel imaging systems.

The team starts by prying open LCDs, removing the uniform backlight, and replacing it with a projector. The HDR display achieves its intense lights and deep darks by layering an image upon itself in a process known as "double modulation": the projected image overlaps and reinforces the scene on the LCD panel.

"The two images multiply so the dynamic range increases by that multiplication," Fairchild says. "It's as though you're looking at a projector head on."
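The multiplication Fairchild describes can be sketched numerically. The contrast ratios below are illustrative assumptions, not measured figures for any particular projector or panel:

```python
import numpy as np

# Double-modulation sketch. Contrast ratios are illustrative assumptions.
projector_contrast = 1000.0   # hypothetical projector dynamic range
lcd_contrast = 500.0          # hypothetical LCD panel dynamic range

# A small test image; each modulator can only reach down to 1/contrast.
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(4, 4))
projector_layer = np.clip(image, 1.0 / projector_contrast, 1.0)
lcd_layer = np.clip(image, 1.0 / lcd_contrast, 1.0)

# The viewer sees the pixel-wise product of the two modulation layers.
displayed = projector_layer * lcd_layer

# The darkest achievable pixel is 1/(projector_contrast * lcd_contrast),
# so the combined dynamic range is roughly the product of the two contrasts.
combined_contrast = projector_contrast * lcd_contrast
```

Because the two layers multiply rather than add, a modest projector and a modest panel together reach a contrast neither could approach alone.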

Fairchild spent six months traveling the United States, collecting HDR images. He shot nine separate exposures of each scene, from overexposed to underexposed, capturing bright to dim light and everything in between.

"The high dynamic range takes all of these together and uses the best information out of each image to make one that includes the whole range of light," he says.
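The merging step Fairchild describes can be sketched as a simple weighted fusion. The hat-shaped weighting and the synthetic three-pixel scene below are simplifying assumptions for illustration, not the survey's actual processing pipeline:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed shots (values in [0, 1]) into one radiance estimate."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        # Hat weight: zero for clipped shadows/highlights, peak at mid-gray,
        # so each pixel draws on whichever exposure recorded it best.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t          # divide out exposure to estimate radiance
        den += w
    return num / np.maximum(den, 1e-8)

# Three bracketed exposures of the same synthetic scene.
scene = np.array([0.01, 0.1, 0.5])             # "true" relative radiance
times = [4.0, 1.0, 0.25]                       # exposure times in seconds
shots = [np.clip(scene * t, 0.0, 1.0) for t in times]
radiance = merge_exposures(shots, times)
```

In this toy case the long exposure clips the brightest pixel, but its weight drops to zero there, so the merged radiance still matches the underlying scene.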

Fairchild collected content as resource material and published the results in The HDR Photographic Survey in 2008.

Imaging the Golden Gate Bridge with HDR: Mark Fairchild captured the dynamic range of the Golden Gate Bridge and San Francisco from the Marin Headlands as part of his High Dynamic Range (HDR) Photographic Survey.

"It's not just a matter of capturing it, but a matter of how to render it, how to put it on a display or a piece of paper," he says. "That's where our research came in."

Creation, display, and perception of moving images also intrigue Fairchild, who is developing a research program with potential applications in the entertainment industry. He is exploring spectral imaging in digital cinema with David Long, assistant professor in RIT's School of Film and Animation and a doctoral student in color science.

Dolby Laboratories supported some of Fairchild's early digital cinema research and hired imaging science alumnus Mahdi Nezamabadi, who did his doctoral work with Berns on compression and picture quality. Interest in this area is growing. Fairchild and scientists at Technicolor are discussing a new project on human perception of displays in cinema.

"Work with corporations like Sony, Dolby, and Intel has laid the groundwork for ongoing research that examines how fundamental properties and limitations of the human visual system can be exploited to create more compelling visual experiences in cinema with more realistic color appearance," Fairchild adds.

Modeling Appearance

Fairchild spent part of 1998 on sabbatical at Cornell, where he first met Ferwerda, now the Xerox Endowed Chair in the Center for Imaging Science. Ferwerda has brought to the Munsell Lab his expertise in computer graphics and in modeling properties of appearance—color, gloss, texture, and translucency.

Digital Painting: Jim Ferwerda bridges the gap between the real and digital worlds with his tablet-based digital painting application. “TangiPaint” simulates brushstrokes and texture and allows the artist to manipulate surface orientation and lighting.

"We have a scientific foundation, which includes physical measurement, mathematical modeling, rendering, and perception, and trying to understand the connection between the physical side of things and the perceptual side of things— the appearance," Ferwerda says. "The clear next steps from color are into a more comprehensive study of appearance."

Ferwerda's work quantifying gloss perception has led to industry connections and research projects for General Motors, Sherwin-Williams, and Hewlett-Packard. His master's degree student Alicia Stillwell is currently working on a project for Corning to determine how anti-reflection and anti-glare display surfaces affect image quality.

Advanced display systems are another area of interest for Ferwerda. He is using mobile devices, such as laptops and iPads, to create "tangible displays" that know their orientation in the environment.

"It responds to light the same way a real surface would," Ferwerda says.

His custom apps produce images that simulate changes in surface lighting and appearance on a display screen. TangiPaint, for instance, is a digital painting application that allows the user to create digital paintings with simulated properties of oil paintings—impasto, texture, and gloss.

Advancing Computational Photography

Gu joined the color science program after completing his doctorate in computer science at Columbia University. An interest in computer games led him to computer graphics and computer vision and the quest for photo-realistic imagery. He grew enamored of the physics of surface reflection and the interaction of light with different objects. His doctoral research led him to discover the Munsell Lab.

Appearance Capture Through Computational Illumination: Computational illumination uses optimally coded illumination patterns to capture unconventional but information-rich images of real-world objects, which can then be “decoded” via computation. Jinwei Gu applies the technique in his LED-based multispectral dome, which uses coded illumination to recover geometric and material data to sort scrap metals for recycling more efficiently.

Gu gravitates toward research problems that detour from traditional color science topics like textiles and paints. An article about sorting and classifying materials for recycling using expensive and slow machines inspired him to develop a computer imaging system that automatically separates scrap metals for recycling based on light reflected off the material.

"My current research is focused on appearance capture," Gu says. "How are you going to use computational cameras and computational illumination sources to efficiently measure reflectance and three-dimensional shapes?"
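The decoding step Gu alludes to can be illustrated as a linear inverse problem: if the illumination patterns are known, the scene's response to each light source can be recovered from the coded measurements. The source counts, pattern matrix, and noise-free measurements below are synthetic assumptions, not the dome's actual configuration:

```python
import numpy as np

# Coded-illumination decoding sketch. All sizes and values are synthetic.
rng = np.random.default_rng(1)
n_sources = 8     # hypothetical number of LEDs in the dome
n_patterns = 8    # number of coded illumination patterns captured

# Each row of P says how brightly each LED fires during one capture.
P = rng.uniform(0.0, 1.0, size=(n_patterns, n_sources))

# One pixel's (unknown) response to each LED lit alone.
r_true = rng.uniform(0.0, 1.0, size=n_sources)

# The camera measures a weighted mix of those responses under each pattern.
m = P @ r_true    # noise-free measurements for simplicity

# Decoding: recover the per-source response by least squares.
r_est, *_ = np.linalg.lstsq(P, m, rcond=None)
```

The appeal of coded patterns over lighting each LED in turn is that every capture gathers light from many sources at once, improving signal-to-noise for the same number of images.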

Gu is currently developing a novel camera for Xerox that controls the exposure of each individual pixel for high-speed imaging.

"In this project, we developed a coding scheme where, after you capture an exposure-coded image and decode it, you can recover a video from a single frame," he says. "Different pixels sample the event at different times, so we are sampling the space-time volume more efficiently than with regular cameras."
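A toy simulation can illustrate the idea of per-pixel exposure coding. The one-hot shutter code below, where each pixel opens for exactly one sub-interval of the frame, is a deliberately trivial stand-in for illustration, not the actual coding scheme developed for Xerox:

```python
import numpy as np

# Toy pixel-wise coded exposure. Sizes and the code itself are assumptions.
T = 4                                       # sub-frames per captured frame
rng = np.random.default_rng(2)
video = rng.uniform(0.0, 1.0, (T, 4, 4))    # "true" space-time volume

# Assign each pixel one open sub-interval (a trivial one-hot code).
code_index = rng.integers(0, T, size=(4, 4))
rows = np.arange(4)[:, None]
cols = np.arange(4)[None, :]
shutter = np.zeros((T, 4, 4))
shutter[code_index, rows, cols] = 1.0

# The sensor integrates light through the shutter: one coded image holds
# samples taken at different instants by different pixels.
coded_frame = (shutter * video).sum(axis=0)

# Decoding this trivial code: each pixel reports the scene at its own
# sub-interval; a full video would be interpolated from these samples.
sampled = video[code_index, rows, cols]
```

Richer codes than this one-hot example let neighboring pixels share information, which is what makes recovering a full video from a single coded frame possible.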

Originally published in Research at RIT. View their website for more research stories.