Stanford students program autonomous robots that mimic self-driving cars

In a class that mirrors the programming needed for autonomous cars and robots of the future, students programmed robots to autonomously navigate an unknown cityscape and aid in a simulated rescue of animals in peril.

Rolling along at a cautious pace, a robot about the size of a milk jug maps its surroundings in a toy-size city. It pauses, spins and records more information about its environment – a fence here, a wall there – all while keeping watch for animals in need of rescue.

The robots and their rescue mission are all part of the final demonstration day for Stanford University students in the class Principles of Robotic Autonomy.

“These robots are small but they contain a representative set of sensors that you would see on a real self-driving car,” said Marco Pavone, assistant professor of aeronautics and astronautics at Stanford and instructor for the course. “So, in a way, it is a sort of miniature city where self-driving robots are moving around in a way that is analogous to how a real self-driving car would behave in the world.”

For this demonstration, the robots’ full mission was to map the miniature city, find animals along the way, report back to an emergency responder headquarters and then return to each animal, as though guiding the rescue crew.

The compact robots were outfitted with laser sensors to help them locate and record obstacles, cameras to enable detection of animals – in this case represented by photos of cats, dogs and one elephant – and an on-board computer. Student teams programmed their robots to work at varying levels of autonomy, using industry-standard software and image classification developed through deep learning algorithms.
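To give a flavor of what that camera-based detection step can look like, here is a minimal, illustrative sketch, not the students' actual code, that runs a pretrained image classifier over a camera frame and flags likely animals. The model choice, the label ranges treated as "animals" and the function names are assumptions made for illustration only.

```python
# Illustrative sketch of deep-learning animal detection on a camera frame.
# The pretrained ImageNet classifier and the "animal" label range are
# stand-ins for whatever detector the student teams actually trained.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical choice: roughly the ImageNet dog/cat/wild-canine index range.
ANIMAL_CLASSES = set(range(151, 286))

def spotted_animal(image_path: str) -> bool:
    """Return True if the camera frame most likely shows an animal."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return int(logits.argmax(dim=1)) in ANIMAL_CLASSES
```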

Wide-ranging skills and successes

The class is open to graduate students and ambitious undergraduates and is not restricted to engineering majors. Students learn skills that cross disciplines, delving into the math behind the algorithms they coded and learning how to program those mathematical concepts into their robots.

“I knew a little bit about everything before but being able to implement it on a real robot and seeing the challenges that happened – like getting the navigator to play nice with the image detector – has been really fun,” said Laura Matloff, a graduate student in mechanical engineering who took the course this winter.

The students incrementally built the different components of the autonomy software during the quarter and then worked in teams to integrate those components and deploy them on their robots.

“All of these components work well on their own but putting them together is what tends to be difficult,” said Benoit Landry, a graduate student in aeronautics and astronautics and teaching assistant for the class. “So that’s really something that we’re trying to emphasize: How do you piece all of these complicated parts together to make a whole that works?”

Beyond coaxing an elaborate array of hardware and software components to act in harmony, the teams really displayed their creativity in how their robots dealt with the unexpected. In addition to programming their robot to seek out unmapped areas, Matloff's team members made sure their robot paused occasionally, assessing whether its onboard processing and understanding of the world had caught up to where its wheels had carried it.
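The "pause until the map catches up" idea can be captured in a few lines. The sketch below is purely hypothetical, with an assumed lag threshold and simplified pose bookkeeping, but it shows the gist: command zero velocity whenever the latest pose estimate is too stale relative to how far the wheels have already driven.

```python
# Hypothetical sketch: stop whenever localization/mapping lags the wheels.
import time
from dataclasses import dataclass

@dataclass
class PoseEstimate:
    x: float
    y: float
    stamp: float  # time at which this estimate was computed

MAX_LAG = 0.5  # assumed threshold, in seconds, of tolerable processing lag

def should_pause(latest_estimate: PoseEstimate) -> bool:
    """Pause if the estimator is falling behind the robot's motion."""
    return (time.time() - latest_estimate.stamp) > MAX_LAG

def drive_step(latest_estimate: PoseEstimate, planned_velocity: float) -> float:
    # Command zero velocity until the estimator has caught up.
    return 0.0 if should_pause(latest_estimate) else planned_velocity
```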

Another team implemented a backtracking strategy for when the robot realized something was amiss between its internal map and what its sensors were seeing. A third looked out for a rogue robot zooming by carrying an image of a bike, a stand-in for a cyclist. Upon seeing the faux bike, the robot would stop and play a snippet of the song "Bicycle Race" while it waited for the rogue robot to pass safely out of its way.
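For illustration only, the cyclist behavior reduces to a simple wait loop. The callbacks below (detect_bike, play_song_snippet, is_path_clear) are hypothetical placeholders, not functions from the course software.

```python
# Hypothetical sketch of the "yield to the cyclist" behavior: halt, play a
# song snippet, and resume only once the rogue robot has passed.
import time

def yield_to_bike(detect_bike, play_song_snippet, is_path_clear,
                  poll_period: float = 0.2) -> None:
    """Stop and wait while a (faux) cyclist crosses the robot's path."""
    if not detect_bike():
        return
    play_song_snippet("Bicycle Race")   # placeholder for audio playback
    while not is_path_clear():
        time.sleep(poll_period)
```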

Accelerated learning

Much of the excitement around this class comes from the fact that it is one of the few opportunities for students to have a hands-on project that requires them to make the jump from computer simulation to real, complex hardware systems. It is a project that would have been nearly impossible to assign to a class a decade ago.

“Just a few years ago, this kind of project would have required large teams of researchers and significant investments,” said Pavone, whose lab develops planning, decision-making and artificially intelligent algorithms for autonomous robots, such as self-driving cars, drones and autonomous spacecraft. “Now, leveraging a variety of tools recently developed by the robotics community, we can teach it in a quarter to undergraduates. This is a testament to how quickly this field is progressing.”

In fact, Pavone, who has taught this class for two years now, plans to make this course part of the core curriculum of the aeronautics and astronautics undergraduate major, which is brand new this year. Given the increasing applications of autonomous technology bound for our roads, oceans, sky and space, he believes the knowledge students take away from this class is highly relevant across many areas of engineering.

Pavone is also an assistant professor, by courtesy, of electrical engineering. Principles of Robotic Autonomy was supported in part by CARS, the Center for Automotive Research at Stanford, and Velodyne LiDAR, Inc.