New Virtual Obstacle Courses Are Teaching Real Robots How to Walk

Endless block and stair training gets you somewhere.
Brad Bergan
An army of robots in a simulation. Robotic Systems Lab / Nvidia / YouTube

Send in the robot army.

A virtual army of 4,000 doglike robots was used to train an algorithm capable of enhancing the legwork of real-world robots, according to an initial report from Wired.

And new tricks learned in simulation could soon be on display in a neighborhood near you.

Robots in a simulation mastered step and block navigation

The simulated army was developed by researchers at ETH Zurich in Switzerland, working with engineers at the chipmaker Nvidia. Together, they set thousands of simulated copies of the ANYmal robot loose on obstacles that are hard for legged machines, like steps, slopes, and sharp drops carved into a virtual landscape. Every time a robot solves a navigational problem, the researchers hand it a harder one, nudging the algorithm through a maddeningly unforgiving puzzle whose sole purpose is to teach its digital guests how to surmount the insurmountable.
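
To make that curriculum idea concrete, here is a minimal, self-contained Python sketch of one way a per-robot difficulty level could be raised after each success. Everything in it, including the `attempt_terrain` stand-in for a physics simulator and the specific numbers, is illustrative and not taken from the researchers' actual code.

```python
# Minimal sketch of an automatic curriculum: each simulated robot gets a
# terrain difficulty level, and the level ticks up only after a success.
# The success model below is a toy stand-in for a physics simulator.
import random

NUM_ROBOTS = 4000          # size of the simulated army
MAX_DIFFICULTY = 10        # hardest terrain tier (steepest slopes, tallest steps)

def attempt_terrain(skill: float, difficulty: int) -> bool:
    """Toy stand-in for one simulated traversal attempt."""
    return random.random() < max(0.05, skill - 0.08 * difficulty)

def train(rounds: int = 50) -> list[int]:
    difficulty = [0] * NUM_ROBOTS      # per-robot curriculum level
    skill = [0.5] * NUM_ROBOTS         # crude proxy for what the policy has learned
    for _ in range(rounds):
        for i in range(NUM_ROBOTS):
            if attempt_terrain(skill[i], difficulty[i]):
                # Solved it: hand this robot a harder obstacle next time.
                difficulty[i] = min(difficulty[i] + 1, MAX_DIFFICULTY)
            skill[i] = min(1.0, skill[i] + 0.01)   # training slowly improves the policy
    return difficulty

if __name__ == "__main__":
    levels = train()
    print("average terrain level reached:", sum(levels) / NUM_ROBOTS)
```

In the real project, the "attempt" step is a full physics simulation rather than a one-line success model, but the escalation logic captures the basic idea of handing each robot a harder obstacle once it clears the current one.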

Displayed graphically, the ensuing drama unfolds like an army of confused ants writhing across a gigantic sea of geometric insanity. During training, the robots mastered walking up and down stairs without too much struggle, but slopes threw them for a loop, and few grasped the essentials of sliding down one. Once the final algorithm was transferred to a real-world ANYmal, a four-legged doglike robot with sensors in its head and a detachable robot arm, it successfully navigated blocks and stairs but ran into trouble at higher speeds.

A robot army in a feedback loop with AI

The researchers don't blame the algorithm. Instead, they think a mismatch between how the sensors perceive the real world and the virtual one is causing the coordination issues. Still, this kind of fast-track training could help robots and other machines pick up a wide range of skills, from sewing clothes and harvesting crops to sorting packages in a colossal Amazon facility. The project also reaffirms the significance of using simulation to advance the capabilities of artificial intelligence (AI). "At a high level, very fast simulation is a really great thing to have," said UC Berkeley Professor Pieter Abbeel in the Wired report. Abbeel is also a cofounder of Covariant, a firm that employs AI in simulation to train robot arms in the art of sorting objects for logistics companies.

And Abbeel thinks the Swiss and Nvidia researchers' work with robotic algorithms "got some nice speed-ups," according to the report. AI has come far, and it can now upgrade robots' ability to perform tasks in an everyday world that isn't easily translated into software. The capacity to get a grip on awkward, strange, and slippery surfaces, for example, isn't something you can reduce to a few lines of simple code.

This is why the 4,000 simulated robots were trained with reinforcement learning, an AI method that takes its cue from the way animals learn: through positive and negative feedback. As the robots move their legs, a judging algorithm scores how each movement contributes to the robot's ability to keep walking, and the control algorithms are adjusted accordingly as motion continues. Nvidia's specialized AI chips ran the simulations, enabling the researchers to train the army of robots in one-hundredth of the time it would otherwise require. We've finally arrived at the beginning of self-learning robots, and by combining reinforcement learning with recent AI advances, the limits of robotic movement may come closer to the limits of the physical world.
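
As a rough illustration of that feedback loop, the toy Python sketch below nudges a single controller parameter toward whatever a scoring function rewards. The `walking_reward` judge, the stride parameter, and all the constants are hypothetical stand-ins, not the team's actual reinforcement-learning pipeline.

```python
# Minimal sketch of the feedback idea: a scoring ("judging") function rewards
# leg movements that keep the robot walking, and the controller parameter is
# nudged toward whatever earned more reward. This is a toy hill-climber, not
# the researchers' training code.
import random

def walking_reward(stride: float) -> float:
    """Toy judge: reward peaks at an 'ideal' stride length, with noisy feedback."""
    ideal = 0.7
    return 1.0 - (stride - ideal) ** 2 + random.gauss(0, 0.01)

def train_controller(steps: int = 500) -> float:
    stride = 0.2                                   # initial controller parameter
    best_reward = walking_reward(stride)
    for _ in range(steps):
        candidate = stride + random.gauss(0, 0.05)  # try a slightly different gait
        reward = walking_reward(candidate)
        if reward > best_reward:                    # positive feedback: keep the better gait
            stride, best_reward = candidate, reward
    return stride

if __name__ == "__main__":
    print("learned stride length:", round(train_controller(), 3))
```

A real setup learns thousands of parameters at once and uses gradient-based updates rather than this accept-if-better rule, but the loop of act, score, and adjust is the same.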
