AI system developed by Stanford researchers identifies buildings damaged by wildfire

A deep learning approach to classifying buildings with wildfire damage may help responders focus their recovery efforts and offer more immediate information to displaced residents.

People around the globe have suffered the nerve-wracking anxiety of waiting weeks or months to find out whether their homes have been damaged by wildfires that burn with increasing intensity. Now, once the smoke has cleared enough for aerial photography, researchers have found a way to identify building damage within minutes.

The DamageMap application marks damaged buildings in red and undamaged buildings in green. Researchers developed the platform to provide immediate information about structural damage following wildfires. (Image credit: Galanis et al.)

Through a system they call DamageMap, a team at Stanford University and California Polytechnic State University (Cal Poly) has brought an artificial intelligence approach to building assessment: instead of comparing before-and-after photos, they used machine learning to train a program that relies solely on post-fire images. The findings appear in the International Journal of Disaster Risk Reduction.

“We wanted to automate the process and make it much faster for first responders or even for citizens that might want to know what happened to their house after a wildfire,” said lead study author Marios Galanis, a graduate student in the Civil and Environmental Engineering Department at Stanford’s School of Engineering. “Our model results are on par with human accuracy.”

The current method of assessing damage involves people going door to door to check every building. While DamageMap is not intended to replace in-person damage classification, it could serve as a scalable supplementary tool, offering immediate results and the exact locations of the buildings it identifies. The researchers tested it on a variety of satellite, aerial and drone photography, achieving at least 92 percent accuracy.

“With this application, you could probably scan the whole town of Paradise in a few hours,” said senior author G. Andrew Fricker, an assistant professor at Cal Poly, referencing the Northern California town destroyed by the 2018 Camp Fire. “I hope this can bring more information to the decision-making process for firefighters and emergency responders, and also assist fire victims by getting information to help them file insurance claims and get their lives back on track.”

A different approach

Most computational systems cannot classify building damage efficiently because the AI compares post-disaster photos with pre-disaster images that must come from the same satellite, camera angle and lighting conditions, imagery that can be expensive to obtain or simply unavailable. Current hardware cannot record high-resolution surveillance daily, so these systems can't rely on consistent photos, according to the researchers.

Rather than looking for differences between before-and-after images, DamageMap first uses pre-fire photos of any type to map the area and pinpoint building locations. Then, the program analyzes post-wildfire images to identify damage through features like blackened surfaces, crumbled roofs or the absence of structures.
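To make the two-stage idea concrete, here is a minimal sketch under stated assumptions, not the authors' implementation: it assumes building footprints have already been extracted from pre-fire imagery (represented here as placeholder pixel bounding boxes) and runs a generic binary classifier over per-building crops of a post-fire scene. The placeholder image, the boxes and the untrained ResNet head are all illustrative stand-ins.

```python
# Minimal sketch of a two-stage pipeline in the spirit of DamageMap
# (illustrative only, not the authors' code).
# Stage 1 (assumed done beforehand): building footprints extracted from
# PRE-fire imagery of any type; here they are stand-in pixel bounding boxes.
# Stage 2 (shown below): classify each building crop of a POST-fire image
# as damaged or undamaged.
import torch
from torchvision import models, transforms as T
from PIL import Image

# Hypothetical inputs: in practice the scene would be a georeferenced
# post-fire aerial image; a gray placeholder keeps this snippet self-contained.
scene = Image.new("RGB", (640, 480), color=(90, 90, 90))
footprints = [(120, 80, 220, 180), (400, 300, 520, 420)]  # (left, top, right, bottom)

# A generic ImageNet-pretrained backbone with a 2-class head stands in for
# whatever classifier the authors trained; this head is untrained here.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # [undamaged, damaged]
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    for box in footprints:
        crop = preprocess(scene.crop(box)).unsqueeze(0)  # one building at a time
        p_damaged = torch.softmax(model(crop), dim=1)[0, 1].item()
        print(box, "damaged" if p_damaged > 0.5 else "undamaged", round(p_damaged, 2))
```

Because each building is scored independently from its own post-fire crop, the approach does not need a matched pre-fire photo taken under identical conditions.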

“People can tell if a building is damaged or not – we don’t need the before picture – so we tested that hypothesis with machine learning,” said co-author Krishna Rao, a graduate student in Earth system science at Stanford’s School of Earth, Energy & Environmental Sciences (Stanford Earth). “This can be a powerful tool for rapidly assessing damage and planning disaster recovery efforts.”

Structural damage from wildfires in California is typically divided into four categories: almost no damage, minor damage, major damage or destroyed. Because DamageMap relies on aerial images, the researchers quickly realized the system could not make assessments at that level of detail, so they trained the model simply to determine whether fire damage was present or absent.
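As a toy illustration of that design choice (not something taken from the paper), the four ground-survey categories can be collapsed into the single damaged/undamaged label the model predicts; where the cutoff falls below is an assumption made for illustration.

```python
# Toy illustration of collapsing four ground-survey categories into the
# binary label predicted from aerial imagery (cutoff chosen for illustration).
def to_binary(category: str) -> str:
    damaged = {"minor damage", "major damage", "destroyed"}  # assumed cutoff
    return "damaged" if category in damaged else "undamaged"

for category in ["almost no damage", "minor damage", "major damage", "destroyed"]:
    print(f"{category:>16} -> {to_binary(category)}")
```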

Opportunities for growth

Because the team trained their deep learning model with supervised learning, it can continue to improve as it is fed more labeled data. They tested the application using damage assessments from Paradise, California, after the Camp Fire and from the Whiskeytown-Shasta-Trinity National Recreation Area after the Carr Fire of 2018. The researchers said the open-source platform can be applied to any area prone to wildfires, and they hope it could also be trained to classify damage from other disasters, such as floods or hurricanes.
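A hedged sketch of what that continued supervised training could look like, assuming newly labeled building crops sit in an ImageFolder-style directory with damaged/ and undamaged/ subfolders; the folder name, model choice and hyperparameters are placeholders, not details from the paper.

```python
# Illustrative supervised fine-tuning on newly labeled post-disaster crops
# (placeholder paths and hyperparameters; not DamageMap's training code).
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms as T

transform = T.Compose([T.Resize((224, 224)), T.ToTensor()])
# Assumed layout: new_labels/damaged/*.png and new_labels/undamaged/*.png
dataset = datasets.ImageFolder("new_labels", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # damaged / undamaged head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few passes over the new labels
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```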

“So far our results suggest that this can be generalized, and if people are interested in using it in real cases, then we can keep improving it,” Galanis said.

Galanis and Rao developed the project during Stanford's 2020 Big Earth Hackathon: Wildland Fire Challenge. They later collaborated with Cal Poly researchers to refine the platform, a connection that grew out of Rao and Fricker's participation in Google's 2019 "Geo For Good" conference, where the two built an initial prototype as part of the conference Build-A-Thon.

The co-authors tested their model results against damage data collected on-site by California Department of Forestry and Fire Protection (CAL FIRE) agents – information that made the research possible.

“Damage inspectors went through painstaking efforts going door to door, looking at the damage, geotagging locations and finally making it publicly accessible,” Rao said. “Researching or innovating future technologies directly hinges on access to such data.”

Study co-authors include Xinle Yao and Yi-Lin Tsai from the Department of Civil and Environmental Engineering at Stanford and Jonathan Ventura from Cal Poly.

The research was supported by the California Polytechnic State University, San Luis Obispo 2019 Research, Scholarly, and Creative Activities Grant Program, the NASA Earth and Space Science Fellowship and the Stanford Data Science Scholarship.
