Maze-Solving Artificial Intelligence Teaches Itself to Take Shortcuts

The team at DeepMind built an AI with a human-like approach to solving a maze: finding the quickest shortcut available.
Shelby Rogers

Most humans naturally look for the shortest route between two points. It saves time, energy, and often headaches to find the speediest and most efficient path from point A to point B. However, that skill is no longer specific to living creatures. A team of engineers developed an artificial intelligence program that learned to look for shortcuts through a complicated maze. 


While the engineers laid the foundation for the AI's shortcut seeking, the program effectively taught itself -- developing internal structures and strategies similar to the ones humans use when finding shortcuts in their own problem-solving. 

The study was published in the most recent edition of the journal Nature, and it comes from researchers at DeepMind. That name should sound familiar to fans of artificial intelligence: DeepMind is the British AI company responsible for AlphaGo -- the self-taught computer system that has bested some of the world's best Go players. 


This particular study means a bit more for AI than excelling at a game. DeepMind researchers discovered that when they trained the AI to move through a maze, it spontaneously developed patterns of electrical activity similar to those found in the human brain. In humans, this activity takes place in what are called 'grid cells.' (The discovery of those cells earned the 2014 Nobel Prize in Physiology or Medicine.) The breakthrough suggests AI systems could come to behave considerably more like humans.


“It is doing the kinds of things that animals do and that is to take direct routes wherever possible and shortcuts when they are available,” said Dharshan Kumaran, a senior researcher at DeepMind. “With the grid cells, its performance is markedly enhanced to the point that it surpasses an expert human player.”

Johns Hopkins University neuroscientist Francesco Savelli explained more about the AI 'brain' and its architecture. Savelli was not involved in this particular paper, but he has extensive knowledge of such AI systems. Those systems don't quite have what it takes to emulate the diversity of real neurons, Savelli told Phys.org in an interview.

"Most of the learning is thought to occur with the strengthening and weakening of these synapses," Savelli said in an interview, talking about the connections between neurons. "And that's true of these AI systems too—but exactly how you do it, and the rules that govern that kind of learning, might be very different in the brain and in these systems."

Humans (and most other animals) navigate with ease thanks to grid cells, which tell the body exactly where it is and where it's headed. The DeepMind researchers wondered if they could develop an AI that could replicate that process. To train the AI's network, they used data from rats foraging for food in a maze, feeding the system information about how each rat moved and how fast it was moving, along with directional information about its paths. 
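The actual experiments are more elaborate, but the general shape of that training setup is straightforward: feed a recurrent network the animal's speed and heading at each moment and ask it to predict where the animal is -- in other words, make it learn path integration. The sketch below, in PyTorch, uses synthetic random-walk trajectories in place of the real rat data; the architecture and hyperparameters are illustrative assumptions, not the paper's:

```python
# Sketch of the training idea: a recurrent network receives per-step
# movement data (speed and heading) and must predict position.
# Synthetic random walks stand in for the rat-foraging trajectories;
# sizes and learning rate are illustrative, not DeepMind's.
import math
import torch
import torch.nn as nn

class PathIntegrator(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        # Input per timestep: (speed, sin(heading), cos(heading))
        self.rnn = nn.LSTM(input_size=3, hidden_size=hidden_size,
                           batch_first=True)
        self.readout = nn.Linear(hidden_size, 2)   # predicted (x, y)

    def forward(self, movement):
        hidden, _ = self.rnn(movement)
        return self.readout(hidden), hidden        # positions + unit activity

def random_trajectories(batch=32, steps=100):
    """Random-walk 'foraging' paths: per-step inputs and true positions."""
    heading = torch.rand(batch, steps) * 2 * math.pi
    speed = torch.rand(batch, steps) * 0.1
    dx, dy = speed * torch.cos(heading), speed * torch.sin(heading)
    positions = torch.stack([dx.cumsum(1), dy.cumsum(1)], dim=-1)
    inputs = torch.stack([speed, torch.sin(heading), torch.cos(heading)],
                         dim=-1)
    return inputs, positions

model = PathIntegrator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(1000):
    inputs, true_positions = random_trajectories()
    predicted, unit_activity = model(inputs)
    loss = nn.functional.mse_loss(predicted, true_positions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Nothing in this loop mentions grid cells; the network is only rewarded for tracking position, which is what makes the result below notable.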

The team noticed that the simulated rodent controlled by the AI developed that same grid cell-like activity -- despite the researchers never building grid cells into the program's training. 
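Detecting that activity is conceptually simple: divide the arena into spatial bins and average each hidden unit's activation over the moments the simulated animal spent in each bin. Grid-like units show a periodic, hexagonal firing pattern in the resulting "rate map." Below is a sketch of that binning step with NumPy, where the position and activity arrays are assumed to come from a trained network like the one sketched above:

```python
# Sketch of a spatial "rate map" for one hidden unit: average its
# activation in each spatial bin the simulated animal visited.
# Grid-like units show a periodic, hexagonal pattern in this map.
import numpy as np

def rate_map(positions, activity, bins=20, extent=(-2.0, 2.0)):
    """positions: (T, 2) visited (x, y); activity: (T,) one unit's output."""
    edges = np.linspace(extent[0], extent[1], bins + 1)
    total = np.zeros((bins, bins))
    visits = np.zeros((bins, bins))
    x_idx = np.clip(np.digitize(positions[:, 0], edges) - 1, 0, bins - 1)
    y_idx = np.clip(np.digitize(positions[:, 1], edges) - 1, 0, bins - 1)
    for xi, yi, a in zip(x_idx, y_idx, activity):
        total[yi, xi] += a
        visits[yi, xi] += 1
    return total / np.maximum(visits, 1)   # mean activity per location
```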

"The emergence of grid-like units is an impressive example of deep learning doing what it does best: inventing an original, often unpredicted internal representation to help solve a task," Savelli and fellow researcher James Knierim said in a commentary on the DeepMind paper.
