Autonomous car research at Stanford
Researchers all over Stanford are working on our driverless future. From exploring complex ethics questions, to developing leading-edge technologies, to real-world testing of custom autonomous cars, Stanford researchers are striving to help ensure the safety of driverless vehicles.
Slowly rolling out onto city streets, self-driving cars are testing their driving chops in Silicon Valley, Helsinki, London and a few dozen other isolated locations around the globe – and the expectation is that their numbers will only swell.
With those eerily empty cars come questions about everything from traffic patterns (will we still need stop lights if cars can communicate?) to insurance (who pays for an autonomous car’s accident?).
And then there’s the question of safety.
“Computers don’t get drunk,” said Stephen Zoepf, executive director of the Center for Automotive Research at Stanford (CARS). “There are a sweeping group of accidents that will go away.” But we still don’t know what kinds of mistakes autonomous cars will make instead.
It’s these kinds of questions – and the mechanics, algorithms and policies that go with them – that need to be resolved before humans can completely kick their feet up on the dashboard and zone out.
Zoepf and Chris Gerdes, who directs both CARS and the Revs program at Stanford, have said we’re about 90 percent of the way to our driverless future. It’s the remaining 10 percent that teams of Stanford faculty and students from across engineering, psychology and law are working to address.
Driverless cars in the world
Removing a driver from behind the wheel takes away more than just physical reflexes. It also eliminates the complex decision-making that goes into even routine journeys – choosing whether to swerve into a neighboring lane to avoid a possible obstacle, or navigating ambiguous intersections.
These kinds of decisions come down to algorithms that emulate a driver’s morals, but who gets to decide what those are? Engineers? Policymakers? Car manufacturers? Ethicists, philosophers and engineers are debating these questions now, even as the new standards are being developed.
Mechanical eyes and reflexes
Backup cameras. Parking assist. Collision alert. Even mid-range cars today are loaded with cameras, sensors and technologies that make driving safer, but they still rely on a human in the driver’s seat.
Taking these technologies to a level where they could independently and safely control all aspects of a car’s journey will require next-generation tools to act as the car’s eyes, ears and even its reflexes, making split-second decisions with often ambiguous information.
Some of those new technologies are under development now, but many won’t come from car research per se; they will come from decades of work on autonomous robots roaming far out of human sight on Earth and across the solar system. These free-wheeling robots need the same kinds of powerful cameras and sensors as cars to keep them safe and on task. Other technologies come from work on batteries and solar cells, which may eventually power these cars, or from advances in computer imaging that allow cars to differentiate between lethal obstacles and fluttering plastic bags.
The next generation of scientists, engineers, programmers and, sometimes, welders are honing their skills as students working on Stanford’s own fleet of autonomous cars.
Testing their work on the track, the group is perfecting not just the mechanics of how their cars operate but the algorithms for steering the cars with the fluidity of a professional driver. Track racing isn’t the group’s ultimate goal, but algorithms that can safely navigate tight turns at high speeds or compensate for variable traction on the fly are better prepared to handle the rigors of city streets and dicey weather.
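To give a flavor of what a path-following steering algorithm involves, here is a minimal sketch of pure pursuit, a classic steering rule from the robotics literature. This is an illustration only, not the Stanford team’s method – their controllers handle high speeds and variable traction, which this simple rule does not – and the function and parameter names are our own.

```python
import math

def pure_pursuit_steering(car_x, car_y, heading, target_x, target_y, wheelbase):
    """Steer toward a lookahead point on the desired path.

    Pure pursuit fits a circular arc from the car's rear axle through a
    "lookahead" point on the path, then returns the bicycle-model steering
    angle (radians) that follows that arc. Positive angles steer left.
    """
    # Angle between the car's heading and the line to the lookahead point
    alpha = math.atan2(target_y - car_y, target_x - car_x) - heading
    # Distance to the lookahead point
    lookahead = math.hypot(target_x - car_x, target_y - car_y)
    # Arc curvature is 2*sin(alpha)/lookahead; convert to a steering angle
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# A car at the origin facing +x, with a target 10 m ahead and 2 m to the
# left, should get a small positive (leftward) steering angle.
angle = pure_pursuit_steering(0.0, 0.0, 0.0, 10.0, 2.0, wheelbase=2.7)
```

In practice the lookahead distance is tuned with speed: a longer lookahead smooths the path at high speed, while a shorter one tracks tight turns more precisely – one small example of the trade-offs such algorithms must balance.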