New Deep-Learning System Can Train Robots to Learn Skills by Watching Humans

A new deep-learning system developed by Nvidia's robotics lab will help robots and humans work and live together.
Jessica Miley

Nvidia is training robots to learn new skills by observing humans. Initial experiments with the process have seen a Baxter robot learn to pick up and move colored boxes and a toy car in a lab environment. 

The researchers hope the development of the new deep-learning based system will go some way to train robots to work alongside humans in both manufacturing and home settings. “In the manufacturing environment, robots are really good at repeatedly executing the same trajectory over and over again, but they don’t adapt to changes in the environment, and they don’t learn their tasks,” Nvidia principal research scientist Stan Birchfield told VentureBeat.

“So to repurpose a robot to execute a new task, you have to bring in an expert to reprogram the robot at a fairly low level, and it’s an expensive operation. What we’re interested in doing is making it easier for a non-expert user to teach a robot a new task by simply showing it what to do.” 

The researchers trained a sequence of neural networks to perform duties associated with perception, program generation, and program execution. The result was that the robot was able to learn a new task from a single demonstration in the real world. 


Once the robot witnesses the task, it generates a human-readable description of the states required to complete the task. A human can then correct the steps if necessary before execution on the real robot. 
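The demonstrate-then-describe loop above can be sketched in a few lines. This is a hypothetical illustration of the idea, not Nvidia's actual system: the function names, state representation, and plan format are all assumptions made for clarity.

```python
# Minimal sketch of the demonstrate -> describe -> correct -> execute loop.
# States map each cube to whatever it rests on ("table" or another cube).
# All names here are illustrative, not part of Nvidia's real pipeline.

def plan_from_demo(initial, final):
    """Infer a human-readable plan from observed before/after states."""
    steps = []
    for cube, support in final.items():
        if initial.get(cube) != support:
            steps.append(f"place {cube} on {support}")
    return steps

def execute(plan, state):
    """Apply each step to a copy of the state, simulating the robot."""
    state = dict(state)
    for step in plan:
        _, cube, _, support = step.split()
        state[cube] = support
    return state

# Demonstration: red ends up stacked on blue.
before = {"red": "table", "blue": "table"}
after = {"red": "blue", "blue": "table"}

plan = plan_from_demo(before, after)
# Because `plan` is plain text, a human could inspect and correct
# the steps here before they run on the real robot.
result = execute(plan, before)
```

The point of the human-readable intermediate plan is exactly the correction step the researchers describe: a non-expert can vet or fix the inferred steps before execution.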

“There’s sort of a paradigm shift happening in the robotics community now,” Birchfield said. “We’re at the point now where we can use GPUs to generate essentially a limitless amount of pre-labeled data — essentially for free — to develop and test algorithms. And this is potentially going to allow us to develop these robotics systems that need to learn how to interact with the world around them in ways that scale better and are safer.” 

In a video released by the researchers, a human operator shows the robot a pair of stacks of cubes. The system then infers an appropriate program and places the cubes in the correct order.

It is able to recover from mistakes because it takes into account real-world changes it encounters. The research is being carried out at Nvidia's robotics research lab. Nvidia announced the lab's creation last year; it now has six employees working out of a facility close to the University of Washington in Seattle.

The research will be presented at the International Conference on Robotics and Automation (ICRA) taking place this week in Brisbane, Australia.

Via: Nvidia
