Stanford’s JackRabbot 2: The polite pedestrian robot

Like its predecessor, JackRabbot 2 is learning how to navigate safely through spaces occupied by people, following the rules of human etiquette. What it learns could help it move comfortably among us in the future.

If you’ve ever been caught in the dreaded sidewalk tango, failing to gracefully maneuver around someone directly in your path, you have some idea of the challenges ahead for a Stanford University robot, JackRabbot 2. With a squat body, friendly eyes and a two-fingered arm, JackRabbot 2 is tasked with learning all it can about moving with, around and between humans.

JackRabbot 2 has an arm and expressive eyes, which can help communicate the robot’s next move to people around it. (Image credit: Amanda Law)

The new robot – an update of the stylish JackRabbot – is part of a field of research interested in getting robots closer to humans so they can work as generalized personal helpers: delivering packages, cleaning house and grabbing a snack from the fridge.

“The JackRabbot project is developing a robot that doesn’t just navigate an environment by following the behavior of a traditional robotic system, such as going from Point A to Point B and avoiding bumping into obstacles,” said Silvio Savarese, associate professor of computer science, who leads the JackRabbot project. “We want a robot that is also aware of the surroundings and the social aspects of human-robot interactions, so it can move among humans in a more natural way.”

Right now, the researchers are testing the robot’s new features – its predecessor had neither an arm nor a face – and continuing to gather a massive amount of detailed data for the algorithm that will inform JackRabbot 2’s socially aware autonomous navigation. The Stanford Vision and Learning Lab, home to JackRabbot 2, officially unveiled the new robot at an on-campus event Sept. 20.

No, you first

Robotic arms are typically used to interact with physical objects, such as opening a door or picking up a container. JackRabbot 2’s arm will be capable of these actions but will also convey intention – signaling a person to go ahead or stop with hand gestures. Facial expressions and sounds will also help JackRabbot 2 communicate with the people around it.

The JackRabbot project team with JackRabbot 2 (from left to right): Patrick Goebel, Noriaki Hirose, Tin Tin Wisniewski, Amir Sadeghian, Alan Federman, Silvio Savarese, Roberto Martín-Martín, Pin Pin Tea-mangkornpan and Ashwini Pokle (Image credit: Amanda Law)

JackRabbot 2 is packed with sensors to help it navigate the world. These include multiple depth-sensing and stereo cameras, GPS and three LIDAR sensors, which send and detect pulsed laser light to define the surrounding environment in three dimensions.
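
The article doesn’t go into the robot’s software, but as a rough illustration of what a LIDAR sensor actually delivers, here is a minimal Python sketch – with made-up scan values, not the lab’s code – that turns pulsed-laser range-and-angle returns into 3D points:

    import numpy as np

    def scan_to_points(ranges, azimuths, elevations):
        """Convert one LIDAR sweep (ranges in meters, angles in radians)
        into an (N, 3) array of x, y, z points around the sensor."""
        r = np.asarray(ranges)
        az = np.asarray(azimuths)
        el = np.asarray(elevations)
        x = r * np.cos(el) * np.cos(az)
        y = r * np.cos(el) * np.sin(az)
        z = r * np.sin(el)
        return np.stack([x, y, z], axis=-1)

    # Hypothetical horizontal ring of returns, one per degree, all 5 m away.
    az = np.deg2rad(np.arange(360.0))
    points = scan_to_points(np.full(360, 5.0), az, np.zeros(360))
    print(points.shape)  # (360, 3)

Accumulating many such sweeps over time is what lets the robot build the three-dimensional picture of its surroundings described above.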

The researchers estimate they will need at least 24 hours of data in various environments – indoors and outdoors during different times of day, in crowds of varying density and configuration – to teach the artificially intelligent algorithm that will allow JackRabbot 2 to navigate autonomously with humanlike etiquette. The algorithm is designed so that it could work with other, similar robots as well.

“There are many behaviors that we humans subconsciously follow – when I’m walking through crowds, I maintain personal distance or, if I’m talking with you, someone wouldn’t go between us and interrupt,” said Ashwini Pokle, a graduate student in the Stanford Vision and Learning Lab. “We’re working on these deep learning algorithms so that the robot can adapt these behaviors and be more polite to people.”
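
The lab’s approach is learned from data rather than hand-coded, but to make the “personal distance” idea concrete, here is a hypothetical Python sketch of a planner cost that penalizes candidate steps for crowding detected people. The comfort_radius value and all positions are invented for illustration:

    import numpy as np

    def personal_space_cost(waypoint, people, comfort_radius=1.2):
        """Penalty for a candidate (x, y) waypoint that grows as it
        gets closer to any person; zero beyond comfort_radius meters."""
        cost = 0.0
        for person in people:
            d = np.linalg.norm(np.asarray(waypoint) - np.asarray(person))
            if d < comfort_radius:
                cost += (comfort_radius - d) ** 2
        return cost

    # Choose the candidate step that stays politely clear of two pedestrians.
    people = [(1.0, 0.0), (2.0, 1.5)]
    candidates = [(0.5, 0.0), (0.5, -1.0), (1.5, 0.5)]
    best = min(candidates, key=lambda w: personal_space_cost(w, people))
    print(best)  # (0.5, -1.0), the step farthest from both people

A learned policy would, in effect, discover penalties like this one – and subtler ones, such as not cutting between two people in conversation – from the recorded demonstrations.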

The data collection process includes training the algorithm on pre-existing videos featuring many modes of sidewalk transportation: pedestrians, skateboarders, bicyclists and people on scooters. The group will also crowdsource data from a video game in which people maneuver a simulation of JackRabbot 2 through online environments. And for some real-world exposure, the researchers take JackRabbot 2 out for walks, navigating it manually in the way they want it to someday move itself.

“Every time we go around campus or outside, we want to collect all of the data that the robot perceives through its sensors,” said Roberto Martín-Martín, a postdoctoral fellow in the Stanford Vision and Learning Lab. “We train the robot to understand these data and infer things like ‘Where are people in these images?’ or ‘How do I move if I receive this sensor data?’”
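
The article doesn’t name the models involved, but answering “Where are people in these images?” is a standard detection task. As a stand-in – not the lab’s actual model – this Python sketch runs an off-the-shelf detector pretrained on the COCO dataset (where label 1 is “person”) over a camera frame:

    import torch
    import torchvision

    # Off-the-shelf object detector pretrained on COCO; illustrative only.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    def find_people(image, min_score=0.8):
        """Return bounding boxes of people in a CHW float image in [0, 1]."""
        with torch.no_grad():
            output = model([image])[0]
        keep = (output["labels"] == 1) & (output["scores"] >= min_score)
        return output["boxes"][keep]

    # A random stand-in frame; in practice this would come from the cameras.
    frame = torch.rand(3, 480, 640)
    print(find_people(frame))

Detections like these, fused with the LIDAR and GPS data, become the inputs from which the navigation behavior is learned.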

Both JackRabbots are intentionally cute – and named for Stanford’s resident jackrabbits. Since these robots are designed to move among us without supervision, the researchers wanted to make people comfortable around them. They did such a good job of making them approachable, however, that the robots have also had to learn what an incoming hug looks like and how to pause for selfies.

The last-mile problem

Although the JackRabbot project may be a step toward personal robots, the most immediate application is the “last-mile problem.” This is the attempt to bridge the gap between large-scale delivery hubs and individual businesses or homes – an environment that is drastically different and less defined by rules and boundaries than roads where autonomous cars roam.

One day, JackRabbot descendants may be a regular sight, grabbing packages from an autonomous delivery truck and bringing them to your front door. At Stanford, that day may come soon: the researchers hope to have JackRabbot 2 try out its delivery-bot skills on campus.

Current members of the JackRabbot team include Michael Abbot, Max Austin Chang, Kevin Chen, Vincent Chow, Alan Federman, Patrick Goebel, JunYoung Gwak, Noriaki Hirose, Vineet Kosaraju, Richard Martinez, Roberto Martín-Martín, Ashwini Pokle, Seyed Hamid Rezatofighi, Amir Sadeghian, Silvio Savarese, Pin Pin Tea-mangkornpan, Nathan Tsoi, Marynel Vázquez, Tin Tin Wisniewski and Xiaoxue Zang.

Savarese is also a member of the Stanford Neurosciences Institute.