Stanford researchers devise way to see through clouds and fog

Using a new algorithm, Stanford researchers have reconstructed the movements of individual particles of light to see through clouds, fog and other obstructions.

In a scene that seems lifted from a comic book, researchers at Stanford University have developed a kind of X-ray vision – only without the X-rays. Working with hardware similar to what enables autonomous cars to “see” the world around them, the researchers enhanced their system with a highly efficient algorithm that can reconstruct three-dimensional hidden scenes based on the movement of individual particles of light, or photons. In tests, detailed in a paper published Sept. 9 in Nature Communications, their system successfully reconstructed shapes obscured by 1-inch-thick foam. To the human eye, it’s like seeing through walls.

A three-dimensional reconstruction of the reflective letter “S,” as seen through the 1-inch-thick foam. (Image credit: Stanford Computational Imaging Lab)

“A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible,” said Gordon Wetzstein, assistant professor of electrical engineering at Stanford and senior author of the paper. “This is really pushing the frontier of what may be possible with any kind of sensing system. It’s like superhuman vision.”

This technique complements other vision systems that can see through barriers at the microscopic scale – for applications in medicine – because it’s more focused on large-scale situations, such as navigating self-driving cars through fog or heavy rain, or satellite imaging of the surfaces of Earth and other planets through hazy atmospheres.

Supersight from scattered light

In order to see through environments that scatter light every which way, the system pairs a laser with a super-sensitive photon detector that records every bit of laser light that hits it. As the laser scans an obstruction like a wall of foam, an occasional photon will manage to pass through the foam, hit the objects hidden behind it and pass back through the foam to reach the detector. The algorithm-supported software then uses those few photons – and information about where and when they hit the detector – to reconstruct the hidden objects in 3D.
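The full reconstruction algorithm is described in the paper; as a rough, hypothetical illustration of the underlying time-of-flight idea (that when a photon arrives back at the detector encodes how far it traveled), the Python sketch below simulates photon time stamps for a single scan position, bins them into a histogram and converts the peak arrival time into a distance. The function names and numbers are invented for illustration, and the sketch ignores scattering entirely, so it is not the researchers’ method, only the basic principle their hardware builds on.

```python
import numpy as np

C = 3e8  # speed of light in m/s

def simulate_photon_arrivals(depth_m, n_photons=50, jitter_s=50e-12, seed=0):
    """Toy model of one scan position: each detected photon is time-stamped
    with the round-trip time of flight to a hidden surface, blurred by
    detector timing jitter. (Real measurements through foam also contain a
    broad scattering tail, which this sketch deliberately ignores.)"""
    rng = np.random.default_rng(seed)
    t_round_trip = 2.0 * depth_m / C
    return t_round_trip + rng.normal(0.0, jitter_s, size=n_photons)

def estimate_depth(arrival_times_s, bin_width_s=10e-12):
    """Bin the photon time stamps into a histogram and convert the peak
    bin's arrival time back into a distance via d = c * t / 2."""
    edges = np.arange(arrival_times_s.min(),
                      arrival_times_s.max() + bin_width_s, bin_width_s)
    counts, edges = np.histogram(arrival_times_s, bins=edges)
    peak = counts.argmax()
    t_peak = 0.5 * (edges[peak] + edges[peak + 1])
    return C * t_peak / 2.0

# Hypothetical example: a hidden surface 1.2 meters from the detector.
times = simulate_photon_arrivals(depth_m=1.2)
print(f"estimated depth: {estimate_depth(times):.3f} m")
```

Repeating this at every laser scan position would give a coarse depth map under ideal, unscattered conditions; the challenge the researchers tackle is performing the reconstruction when nearly all of the detected photons have been scattered by the foam.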

The laser scanning process in action. Single photons that travel through the foam, bounce off the “S” and return through the foam to the detector provide the information for the algorithm’s reconstruction of the hidden object. (Image credit: Stanford Computational Imaging Lab)

This is not the first system with the ability to reveal hidden objects through scattering environments, but it circumvents limitations associated with other techniques. For example, some require knowledge about how far away the object of interest is. It is also common that these systems rely only on information from ballistic photons – photons that travel to and from the hidden object through the scattering field without actually scattering along the way.

“We were interested in being able to image through scattering media without these assumptions and to collect all the photons that have been scattered to reconstruct the image,” said David Lindell, a graduate student in electrical engineering and lead author of the paper. “This makes our system especially useful for large-scale applications, where there would be very few ballistic photons.”

In order to make their algorithm amenable to the complexities of scattering, the researchers had to closely co-design their hardware and software, although the hardware components they used are only slightly more advanced than what is currently found in autonomous cars. Depending on the brightness of the hidden objects, scanning in their tests took anywhere from one minute to one hour, but the algorithm reconstructed the obscured scene in real time and could be run on a laptop.

“You couldn’t see through the foam with your own eyes, and even just looking at the photon measurements from the detector, you really don’t see anything,” said Lindell. “But, with just a handful of photons, the reconstruction algorithm can expose these objects – and you can see not only what they look like, but where they are in 3D space.”

Space and fog

Someday, a descendant of this system could be sent through space to other planets and moons to help see through icy clouds to deeper layers and surfaces. In the nearer term, the researchers would like to experiment with different scattering environments to simulate other circumstances where this technology could be useful.

“We’re excited to push this further with other types of scattering geometries,” said Lindell. “So, not just objects hidden behind a thick slab of material but objects that are embedded in densely scattering material, which would be like seeing an object that’s surrounded by fog.”

Lindell and Wetzstein are also enthusiastic about how this work represents a deeply interdisciplinary intersection of science and engineering.

“These sensing systems are devices with lasers, detectors and advanced algorithms, which puts them in an interdisciplinary research area between hardware and physics and applied math,” said Wetzstein. “All of those are critical, core fields in this work and that’s what’s the most exciting for me.”

Gordon Wetzstein is also director of the Stanford Computational Imaging Lab and a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute.

This research was funded by a Stanford Graduate Fellowship in Science and Engineering; the National Science Foundation; a Sloan Fellowship; Defense Advanced Research Projects Agency (DARPA); the Army Research Office (ARO), an element of the U.S. Army Combat Capabilities Development Command’s Army Research Laboratory; and the King Abdullah University of Science and Technology (KAUST).