Each day, the average human makes thousands of choices -- especially when it comes to where to go and how to get there. Researchers from MIT are working on helping robots navigate those same kinds of choices.
A new motion-planning model paired with a neural network helps robots reach their goals by taking in more of their environment. The algorithm builds a 'tree' of possible decisions that keeps branching until the robot finds an optimal path to its destination.
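In rough terms, that branching search can be sketched in a few lines of Python. This is a deliberately simplified illustration, not the team's actual planner: the grid world, the four-move action set, and the breadth-first expansion order are all illustrative assumptions.

```python
from collections import deque

def plan(grid, start, goal):
    """Expand a tree of decisions over grid moves, breadth-first.

    grid: list of strings, where '#' marks an obstacle.
    Returns the shortest obstacle-free path as a list of (row, col)
    cells, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    parent = {start: None}          # records the tree's structure
    while frontier:
        cell = frontier.popleft()
        if cell == goal:            # walk back up the branch that reached the goal
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        # branch into every legal move from this cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#' and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None

grid = ["....",
        ".##.",
        "...."]
path = plan(grid, (0, 0), (2, 3))
```

The key point the article makes is that this kind of search starts from scratch every time: nothing learned in one grid carries over to the next.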
This could mean robots of the future don't have to use as much computing power and can get us where we want to go faster and more safely than before. A paper detailing the new algorithm was presented at the IEEE/RSJ International Conference on Intelligent Robots and Systems.
Fixing the old ways of robo-navigation
Older algorithms stopped at the branching step: they never helped a robot 'learn' from its mistakes, and they didn't remember how they acted in previous instances in similar environments. This lack of 'short-term memory' is what the MIT team wanted to fix.
“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute.
“The thousandth time they go through the same crowd is as complicated as the first time," he continued. "They’re always exploring, rarely observing, and never using what’s happened in the past."
That's where the team's neural network comes into play. The network allows the system to guide a robot through crowded or tricky environments and then apply those strategies in other situations.
“When humans interact with the world, we see an object we’ve interacted with before, or are in some location we’ve been to before, so we know how we’re going to act,” says Yen-Ling Kuo, a PhD student in CSAIL and first author on the paper. “The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”
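The idea of adding a learned model to the search space can be sketched as follows. This is an illustration of the general technique, not the paper's sequential neural network: the `learned_score` function below is a hand-written stand-in for a trained model, and the grid and move set are assumptions.

```python
import heapq

def learned_score(cell, goal):
    """Stand-in for a trained network: lower score = more promising move.

    A real system would learn these scores from past experience; here we
    fake them with the straight-line (Manhattan) distance to the goal.
    """
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

def guided_plan(grid, start, goal):
    """Expand the decision tree best-scored-branch-first instead of blindly."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(learned_score(start, goal), start)]
    parent = {start: None}
    expansions = 0
    while frontier:
        _, cell = heapq.heappop(frontier)   # most promising branch first
        expansions += 1
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1], expansions
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#' and nxt not in parent):
                parent[nxt] = cell
                heapq.heappush(frontier, (learned_score(nxt, goal), nxt))
    return None, expansions

grid = ["....",
        ".##.",
        "...."]
path, expansions = guided_plan(grid, (0, 0), (2, 3))
```

Because promising branches are explored first, the guided search typically expands far fewer nodes than blind branching, which is the efficiency gain Kuo describes.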
Teaching a robot directions
The researchers had to figure out a way to test their new model. They needed an environment that would teach the model "that when you’re stuck in an environment, and you see a doorway, it’s probably a good idea to go through the door to get out,” Barbu said.
They developed a simulation known as the "bug trap" where a 2D robot had to escape from an inner chamber via a narrow channel that led to a larger room. Once the 2D robot escaped the first time, the team presented it with another trap.
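A bug trap of this kind is easy to picture in code. The layout below is my own illustrative version of the idea, not the team's simulation: the robot starts at `S` inside a walled inner chamber whose only exit is a one-cell-wide channel leading to the open room.

```python
# '#' = wall, '.' = free space, 'S' = the robot's start inside the trap.
# The single gap in the bottom wall (row 4, column 3) is the narrow channel.
BUG_TRAP = [
    "#######....",
    "#.....#....",
    "#..S..#....",
    "#.....#....",
    "###.###....",
    "...........",
]

def reachable(grid, start, goal):
    """Flood fill: can the robot get from start to goal at all?"""
    rows, cols = len(grid), len(grid[0])
    stack, seen = [start], {start}
    while stack:
        r, c = stack.pop()
        if (r, c) == goal:
            return True
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

escaped = reachable(BUG_TRAP, (2, 3), (5, 10))
```

Sealing the channel (replacing row 4 with a solid wall) makes the goal unreachable, which is what makes the trap a useful test: the planner only succeeds if it discovers and exploits the one narrow opening.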
They then introduced the model to environments with multiple moving agents, like a traffic intersection or a crowded street. In this particular example, the researchers used a roundabout -- something even the best autonomous cars continue to struggle with in times of high traffic.
“Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu says. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”
The model learned to anticipate what the other cars would do, much as a human driver would, the researchers explained. While it's unsettling to think of Terminator-like robots having this high-powered navigation, the researchers say practical, safe applications are numerous.
This could one day help autonomous cars, like a Tesla Model S, better handle navigating an intersection or merging into traffic. The model could even learn how to handle both aggressive and cautious drivers.