If you didn’t fall madly in love with Wall-E the robot, you’re probably a soulless monster. The Disney/Pixar robot, famous for his sensitive and polite nature, captured the hearts of millions with his sweet antics and slight hoarding habit.
Researchers at the Massachusetts Institute of Technology have managed to build their own version of Wall-E. And in a world where people legitimately fear aggressive robotics, this little guy is a breath of fresh air. The robot is programmed to navigate busy pedestrian environments with an awareness of social norms and a stellar set of manners. It can keep to either the left or the right of a walkway, street, or corridor, depending on the country it's in.
"Socially aware navigation is a central capability for mobile robots operating in environments that require frequent interactions with pedestrians. For instance, small robots could operate on sidewalks for package and food delivery. Similarly, personal mobility devices could transport people in large, crowded spaces, such as shopping malls, airports, and hospitals," lead researcher Yu Fan Chen told MIT News.
While today's robots can avoid most obstacles without collision, Yu Fan Chen and his team wanted to create a droid that makes human interaction a priority and follows the rules of the road as laid out by people. Humans are also unpredictable, an aspect the team weighed heavily when building this particular robot.
Previous attempts at creating socially aware robots have hit more than a few snags, namely with trajectory-based and reactive approaches. A robot that relies on geometric planning or raw sensor data will usually perform badly in an unpredictable environment and be perceived as aggressive.
"The part of the field that we thought we needed to innovate on was motion planning," graduate student and fellow researcher Michael Everett said in the interview. "Once you figure out where you are in the world and know how to follow trajectories, which trajectories should you be following?"
How did they do it? The group focused on localization and perception, outfitting the robot with store-bought sensors such as webcams, a depth sensor, and a high-resolution lidar sensor. To tackle localization, they used open-source algorithms that map the robot's surroundings, enabling it to determine its position. Its movement was controlled using the same methods that drive autonomous ground vehicles.
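The mapping step can be pictured with a toy sketch: each lidar return is projected from the robot's pose into world coordinates and marked in a 2-D occupancy grid. This is a minimal stand-in for the open-source mapping packages the team actually used (which the article does not name); the function and grid representation here are illustrative assumptions.

```python
import math

def update_occupancy_grid(grid, pose, scan, resolution=0.1):
    """Mark lidar returns as occupied cells in a simple 2-D occupancy grid.

    grid: dict mapping (ix, iy) cell indices to an occupied flag
          (hypothetical stand-in for a real mapping package's grid).
    pose: (x, y, heading) of the robot, in metres and radians.
    scan: list of (angle, range) lidar returns relative to the robot.
    """
    x, y, heading = pose
    for angle, rng in scan:
        # Project each return from robot-relative polar coordinates
        # into world coordinates, then bin it into a grid cell.
        wx = x + rng * math.cos(heading + angle)
        wy = y + rng * math.sin(heading + angle)
        grid[(int(wx / resolution), int(wy / resolution))] = True
    return grid

# Two returns: one 1 m ahead, one 2 m to the left of a robot at the origin.
grid = update_occupancy_grid({}, (0.0, 0.0, 0.0),
                             [(0.0, 1.0), (math.pi / 2, 2.0)])
```

Matching such a grid against fresh scans is what lets the robot estimate where it is as it moves.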
The team trained the robot with reinforcement learning, using computer simulations to teach it to choose paths based on the speed and trajectory of other objects in its way. The bot also reassesses its environment every tenth of a second.
“We just look at what we see, choose a velocity, do that for a tenth of a second, then look at the world again, choose another velocity, and go again. This way, we think our robot looks more natural and is anticipating what people are doing,” said Everett.
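The look-choose-move loop Everett describes can be sketched as a scoring function run once per planning tick: project each candidate velocity ahead, discard any that would crowd a pedestrian, and keep the one that ends nearest the goal. This is a toy stand-in for the learned policy the article describes, not the team's method; the function names, clearance threshold, and scoring rule are illustrative assumptions.

```python
import math

def choose_velocity(candidates, goal, obstacles, horizon=1.0):
    """Pick the candidate velocity whose short lookahead gets closest to
    the goal while keeping clear of nearby pedestrians."""
    def score(v):
        # Where would this velocity put us after the lookahead horizon?
        nx, ny = v[0] * horizon, v[1] * horizon
        # Reject velocities that pass within 0.3 m of any pedestrian.
        if any(math.hypot(nx - ox, ny - oy) < 0.3 for ox, oy in obstacles):
            return float("inf")
        return math.hypot(goal[0] - nx, goal[1] - ny)
    return min(candidates, key=score)

# One planning tick: goal straight ahead, pedestrian in the direct path.
v = choose_velocity(
    candidates=[(1.0, 0.0), (0.7, 0.7), (0.0, 1.0)],
    goal=(5.0, 0.0),
    obstacles=[(1.0, 0.0)],
)
# The straight-line candidate is rejected; the robot veers diagonally.
```

Executing the chosen velocity for only a tenth of a second before re-scoring is what lets the robot react to pedestrians who change course without looking jerky or aggressive.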
The robot was tested in the busy hallways of MIT and was able to drive on its own for 20 minutes at a time without a collision. The team's research is due to be presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems.