In order for robots to circulate on sidewalks and mingle with humans in other crowded places, they’ll have to understand the unwritten rules of pedestrian behavior. Stanford researchers have created a short, non-humanoid prototype of just such a self-navigating machine.
The robot is nicknamed “Jackrabbot” – after the jackrabbits often seen darting across the Stanford campus – and looks like a ball on wheels. Jackrabbot is equipped with sensors that let it understand its surroundings and navigate streets and hallways according to normal human etiquette.
The idea behind the work is that by observing how Jackrabbot navigates among students in the halls and on the sidewalks of Stanford’s School of Engineering, and over time learns the unwritten conventions of these social behaviors, the researchers will gain critical insight into how to design the next generation of everyday robots so that they operate smoothly alongside humans in crowded open spaces like shopping malls or train stations.
“By learning social conventions, the robot can be part of ecosystems where humans and robots coexist,” said Silvio Savarese, an assistant professor of computer science and director of the Stanford Computational Vision and Geometry Lab.
The researchers will present their system for predicting human trajectories in crowded spaces at the Computer Vision and Pattern Recognition conference in Las Vegas on June 27.
As robotic devices become more common in human environments, it becomes increasingly important that they understand and respect human social norms, Savarese said. How should they behave in crowds? How do they share public resources, like sidewalks or parking spots? When should a robot take its turn? What are the ways people signal each other to coordinate movements and negotiate other spontaneous activities, like forming a line?
Unlike the traffic rules that govern autonomous cars, these human social conventions aren’t necessarily explicit, nor are they written down complete with lane markings and traffic lights.
So Savarese’s lab is using machine learning techniques to create algorithms that will, in turn, allow the robot to recognize and react appropriately to unwritten rules of pedestrian traffic. The team’s computer scientists have been collecting images and video of people moving around the Stanford campus and transforming those images into coordinates. From those coordinates, they can train an algorithm.
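To make that pipeline concrete, here is a minimal, hypothetical sketch (in Python with PyTorch, not the team’s actual code) of the idea described above: pedestrian paths recorded as (x, y) coordinates are used to train a recurrent model that predicts each walker’s next move from the positions observed so far. The team’s Social LSTM additionally pools information across neighboring pedestrians; that component is omitted here for brevity, and the synthetic data below is a stand-in for the coordinates extracted from campus video.

import torch
import torch.nn as nn

class TrajectoryLSTM(nn.Module):
    """Predict a pedestrian's next-step displacement from observed positions."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)  # output the predicted (dx, dy)

    def forward(self, coords):
        # coords: (batch, time, 2) sequence of observed (x, y) positions
        out, _ = self.lstm(coords)
        return self.head(out)  # predicted displacement at every time step

# Toy training loop on synthetic straight-line walks (illustration only).
model = TrajectoryLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(200):
    starts = torch.rand(32, 1, 2)                    # random starting points
    vel = torch.rand(32, 1, 2) * 0.1                 # constant per-walker velocity
    t = torch.arange(8, dtype=torch.float32).view(1, 8, 1)
    paths = starts + vel * t                         # (32, 8, 2) observed positions
    targets = paths[:, 1:] - paths[:, :-1]           # true next-step displacements

    preds = model(paths[:, :-1])                     # predict from all but the last step
    loss = loss_fn(preds, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

Trained on real trajectories instead of these toy ones, a model like this can roll its predictions forward to anticipate where nearby pedestrians are headed, which is the capability the robot needs before it can plan a socially acceptable path of its own.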
“Our goal in this project is to actually learn those (pedestrian) rules automatically from observations – by seeing how humans behave in these kinds of social spaces,” Savarese said. “The idea is to transfer those rules into robots.”
Jackrabbot already moves automatically and can navigate indoors without human assistance, and the team members are fine-tuning the robot’s self-navigation capabilities outdoors. The next step in their research is implementing the “social aspects” of pedestrian navigation, such as deciding who has the right of way on the sidewalk. This work, described in their newest conference paper, has been demonstrated in computer simulations.
“We have developed a new algorithm that is able to automatically move the robot with social awareness, and we’re currently integrating that in Jackrabbot,” said Alexandre Alahi, a postdoctoral researcher in the lab.
Even though social robots may someday roam among humans, Savarese said he believes they don’t necessarily need to look like humans. Instead, they should be designed to look as lovable and friendly as possible. In demos, the roughly three-foot-tall Jackrabbot roams around campus wearing a Stanford tie and sun hat, drawing hugs and curious looks from passersby.
Today, Jackrabbot is an expensive prototype. But Savarese estimates that in five or six years social robots like this could become as cheap as $500, making it possible for companies to release them to the mass market.
“It’s possible to make these robots affordable for on-campus delivery, or for aiding impaired people to navigate in a public space like a train station or for guiding people to find their way through an airport,” Savarese said.
The conference paper is titled “Social LSTM: Human Trajectory Prediction in Crowded Spaces.” See conference program for details.
Media Contacts
Tom Abate, Associate Director of Communications, Stanford Engineering: (650) 736-2245, tabate@stanford.edu