RIT, along with industry partners, has been researching how to automatically create high-quality 3D models, or reconstructions, from airborne imagery using algorithm-based methods. Imaging scientists and graduate students from the Chester F. Carlson Center for Imaging Science, along with MBA students and professors from the Saunders College of Business, have been working with Exelis, Pictometry International, and Lockheed Martin on the Consortium for 3D Innovation. The alliance is funded by a $1 million National Science Foundation (NSF) Accelerating Innovation Research (AIR) grant and matching funds from the corporate partners.
Google and Apple offer apps in which users can view 3D models of urban features and landscapes. Building such a model is a labor-intensive, expensive process that requires photographs, analysts, and artists to fill in the details.
"If you search for a landmark like the Empire State Building and look closely enough, what you see appears to be an artist's photorealistic rendition on the side of the building," says Carl Salvaggio, professor in the Digital Imaging Remote Sensing Laboratory. "An analyst/artist has gone in and picked points on the building to develop and refine a 3D model and then supplemented it with photorealistic renderings. You are never going to be able to reconstruct every feature or city in the world if someone has to do it manually."
The models could be used in various industries such as national defense, emergency response, urban planning, and agriculture.
"There is a need for visualization of forests to understand the ecological implications of deforestation and associated carbon emissions/sequestration events, or for modeling 3D species habitats, among other applications. 3D models of vegetation, for instance, will become increasingly important in the future for municipalities needing to map urban areas and forests," says Jan van Aardt, associate professor in the Digital Imaging and Remote Sensing Group.
MBA students brainstormed potential market segments. After building a scoring matrix and conducting in-depth research with potential customers, they identified the top four market segments: first responders; viewshed analysis (for example, the placement of cellphone towers); the gaming industry; and the military.
Steve Schultz, co-founder of Pictometry and the company's chief technology officer, says Pictometry sells 3D models to various industries, including first responders, government agencies, school districts, and urban planners. However, it's an expensive product because of the labor involved.
"From this project, we are picking the best algorithms, and the ultimate goal is to develop fully automated 3D models," says Schultz. "If we could automate 60 to 70 percent of the work and have to do only 30 to 40 percent of it manually, we've generated savings that can be passed on to the customer."
Noah Snavely, a professor at Cornell and the co-inventor of Microsoft's Photosynth, partnered with RIT on this project. Snavely is one of the pioneers in generating 3D images from multiview 2D imagery. His work has mined the Web for millions of images of the city of Rome and created 3D reconstructions using an algorithm known in the computer vision community as "structure from motion." This algorithm automatically determines the placement of each camera and the direction in which each image was shot, as well as the 3D structure of the scene. RIT took Snavely's concept a step further by extracting 3D coordinates from airborne image pixels, generating 3D "point clouds," and finally georegistering the data onto a map of the world. For example, one of Salvaggio's imaging science Ph.D. students, Shaohui Sun, produced a 3D model of the buildings in downtown Rochester from point cloud data extracted from aerial imagery.
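At the heart of structure from motion is triangulation: once the cameras' positions and orientations are known, a point seen in two or more images can be located in 3D. The sketch below illustrates that one step with linear (DLT) triangulation from two views; the camera matrices and test point are synthetic values invented for the example, not data from the project.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices
    x1, x2 : (u, v) pixel coordinates of the same point in each image
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: u * (P[2] @ X) = P[0] @ X, etc.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value (the null vector of A in the noise-free case).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def project(P, X):
    """Project a 3D point through camera matrix P to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras: identical intrinsics, with the second camera
# shifted one unit along x (a simple stereo baseline).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, -0.2, 4.0])  # a point in front of both cameras
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true, atol=1e-6))  # → True
```

A full pipeline repeats this for millions of matched features, after first estimating the camera matrices themselves from the image correspondences; the resulting point cloud is what then gets meshed and mapped onto the world.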
"Shaohui created the 3D model with only geometry," says Salvaggio. "To be able to reconstruct the complex rooftops, like the one on the Bausch and Lomb Building for example, with no artist involved in the process, that's huge. When you consider the need for transformative science, as the NSF requires, this is what we are doing here."