In the construction industry, many projects involve remodeling or refurbishing existing buildings, and such jobs often face delays or cost overruns when hidden problems emerge.
“Renovation projects live and die by the quality of information,” according to Martin Fischer, a Stanford professor of civil and environmental engineering.
Newer buildings often have computerized blueprints and records, including details such as the number of rooms, doors and windows, and the square footage of floors, ceilings and walls. But such information may not exist for older buildings, necessitating the time-consuming and difficult task of collecting these details manually.
Now, Stanford researchers have automated the process of getting detailed building information.
Envisioning whole buildings
The system they developed relies upon existing 3-D sensing technologies. These sensors use light to measure every feature of a building’s interior, room by room and floor by floor, to create a massive data file that captures the spatial geometry of any building.
The novelty is in how the Stanford process feeds the raw data file captured by the sensors into a new computer vision algorithm. The algorithm automatically identifies structural elements such as walls and columns, as well as desks, filing cabinets and other furnishings.
“People have been trying to do this on a much smaller scale, just a handful of rooms,” said Silvio Savarese, a Stanford assistant professor in computer science. “This is the first time it’s possible to do it at the scale of whole buildings, with hundreds of rooms.”
The researchers presented their work Tuesday at the IEEE’s Conference on Computer Vision and Pattern Recognition. (IEEE stands for the Institute of Electrical and Electronics Engineers.) They foresee many applications beyond renovation, such as giving facilities managers a system to capture and store a myriad of details about building interiors.
The new process is the brainchild of Stanford doctoral student Iro Armeni, with interdisciplinary oversight from Savarese, who leads the Computational Vision and Geometry Lab, and Fischer, who heads the Center for Integrated Facility Engineering.
As is the case with many innovations, this one grew out of frustration.
‘A better way’
Before joining Stanford, Armeni had been an architect on the Greek island of Corfu. In that capacity she performed custom renovations on historical buildings hundreds of years old. On such jobs, Armeni and her colleagues used tape measures to redraw building plans, a common practice that is both time-consuming and often inaccurate.
“I knew there should be a better way, and I started my doctorate looking into ways I could deal with this problem,” Armeni said.
She began by replacing her tape measure with laser scanners and 3-D cameras. These instruments use light to take measurements with up to millimeter accuracy. When placed inside a building they send out pulses of light in all directions, bathing every interior surface.
By recording precisely how long it takes for a beam of light to hit a given point in the room and bounce back, these instruments create a data file consisting of millions of measurements: specific points where beams of light encountered a surface, whether the edge of a table or a spot on a wall. This massive data file is called a raw point cloud.
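To picture the time-of-flight idea described above, the minimal Python sketch below converts one pulse's round-trip time and beam direction into a single 3-D point. It is an illustration only, not the researchers' code, and the names and values are hypothetical.

```python
# Minimal sketch: turn one light pulse's round-trip time and direction
# into one (x, y, z) point of a raw point cloud.
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def point_from_pulse(round_trip_seconds, azimuth_rad, elevation_rad):
    """Convert a pulse's round-trip time and beam direction into a 3-D point."""
    distance = SPEED_OF_LIGHT * round_trip_seconds / 2.0  # light travels out and back
    x = distance * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance * math.sin(elevation_rad)
    return (x, y, z)

# A scan repeats this for millions of pulses; the collected points are the raw point cloud.
example_point = point_from_pulse(33.4e-9, 0.8, 0.1)  # a surface roughly 5 meters away
```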
Point cloud
As 3-D scanning technologies have come down in price, construction companies have started using them to collect point clouds. But humans still had to look at the point cloud on a computer screen to identify building elements such as windows, walls, hallways and furniture and then type that information into their software tools. To the computer itself, the point cloud was an indistinguishable mass of data.
The Stanford team’s innovation was developing a computer vision system that could analyze the point cloud for a building, distinguish the rooms, and then categorize each element in each room. This automated the second half of the process, eliminating the need for humans to annotate the data.
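The output of such a system can be pictured as a per-room, per-category grouping of the labeled points. The sketch below is only one way to organize that result; the names and structure are assumptions for illustration, not the team's software.

```python
# Illustrative only: organize a semantically parsed point cloud by room and element category.
from collections import defaultdict

# labeled_points: iterable of (x, y, z, room_id, category) tuples produced by some parser
def build_building_map(labeled_points):
    building = defaultdict(lambda: defaultdict(list))
    for x, y, z, room_id, category in labeled_points:
        building[room_id][category].append((x, y, z))
    return building

# Example query (hypothetical room and label names):
# building = build_building_map(labeled_points)
# desk_points = building["office_201"]["desk"]  # all points labeled as desks in that room
```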
Obviously, buildings vary in many ways, including room size, purpose and interior decoration. This is where machine learning and computer vision came in. Machine learning enables an algorithm to learn to recognize patterns from examples rather than from hand-written rules.
To train their computer vision system, the researchers collected a large amount of 3-D point cloud data that humans had annotated. These annotations specified all sorts of building features. Armeni managed the considerable task of feeding this annotated point cloud data to the algorithm.
Through repetition, the computer vision system trained itself to recognize different building elements. Ultimately, the researchers created an algorithm that can analyze raw point cloud data from an entire building and, without human assistance, identify the rooms, enter each room, and detail the structural elements and furniture.
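As a loose illustration of the supervised-learning step described above, the sketch below trains a generic off-the-shelf classifier to assign an element category to each point from simple geometric features. The feature choices and the model are assumptions made for illustration; they are not the researchers' pipeline.

```python
# Illustrative supervised-learning sketch (not the researchers' method):
# predict an element category for each point from hand-crafted geometric features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: per-point features (e.g., height above floor,
# surface-normal components, distance to nearest wall) with human-annotated labels.
X_train = np.array([
    [2.6, 0.0, 0.0, 0.05],   # high point, vertical surface, near a wall
    [0.7, 0.0, 1.0, 1.80],   # desk-height point, upward-facing surface, mid-room
])
y_train = np.array(["wall", "desk"])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# At inference time, the same features are computed for every point of a raw,
# unannotated point cloud, and the model assigns a category to each point.
X_new = np.array([[0.72, 0.0, 1.0, 2.1]])
print(model.predict(X_new))  # e.g., ["desk"]
```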
In this way the computer vision system produces a detailed map of the building interior that is useful as a basis for design or redesign.
“This kind of geometric, contextual reasoning is one of the most innovative parts of the project,” Savarese said.
When Armeni presented her work at a recent conference, Fischer said, the professionals in attendance were impressed that the algorithm can automatically extract useful building information from point cloud data acquired by existing 3-D sensing devices.
“They realized what she had accomplished,” he said.
Other members of the team include postdoctoral scholar Amir Zamir and doctoral student Ozan Sener. Next the researchers plan to further develop the algorithm by providing a way for professionals with raw point cloud data to upload their files and receive the automatically generated results.
In the future, Armeni hopes to create an algorithm that can track the whole life cycle of a building – through design, construction, occupation and demolition.
“As engineers, we shouldn’t lose time trying to find the current status of our building,” Armeni said. “We should invest this time in doing something creative and making our buildings better.”
To inquire about testing on your own 3-D point cloud data from a building project, contact Iro Armeni at iarmeni@cs.stanford.edu.
Media Contacts
Tom Abate, Stanford Engineering: (650) 736-2245, tabate@stanford.edu;
Clifton B. Parker, Stanford News Service: (650) 725-0224, cbparker@stanford.edu