NVIDIA Opens New Research Lab for Teaching Robots to Safely Work Alongside Humans
NVIDIA is opening a new robotics research lab in Seattle to drive breakthrough robotics research, initially focused on mastering tasks in a kitchen. The aim of the lab is to enable the next generation of robots to safely work alongside humans.
Fully integrated systems
“In the past, robotics research has focused on small, independent projects rather than fully integrated systems. We’re bringing together a collaborative, interdisciplinary team of experts in robot control and perception, computer vision, human-robot interaction, and deep learning,” said new lab lead Dieter Fox, senior director of robotics research at NVIDIA and professor in the UW Paul G. Allen School of Computer Science and Engineering.
The lab will host close to 50 research scientists, faculty visitors, and student interns, who will investigate robotics in realistic scenarios. The first of these is a real kitchen, where a mobile manipulator does everything from retrieving objects from cabinets to helping cook a meal.
The robot builds on NVIDIA’s expertise in photorealistic simulation, using deep learning models trained solely on simulated data to detect specific objects. As such, it does not require any tedious manual data labeling.
Cutting-edge technologies
The unique system integrates cutting-edge technologies developed by the lab researchers. These technologies enable the robot to detect objects, track the position of doors and drawers, and grasp and move objects from one spot to another.
The technologies used are:

- Dense Articulated Real-Time Tracking (DART), a method that uses depth cameras to keep track of a robot’s environment
- PoseCNN: 6D Object Pose Estimation, a method for detecting the 3D position and orientation (the 6D pose) of objects
- Riemannian Motion Policies (RMPs) for Reactive Manipulator Control, a new mathematical framework that combines a library of simple actions into complex behavior
- Physics-based Photorealistic Simulation, realistic simulation environments that model the visual properties of objects as well as the forces and contacts between objects and manipulators
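The core idea behind RMPs, blending many simple policies into one coherent behavior, can be sketched as metric-weighted averaging of acceleration commands. The snippet below is a simplified illustration under that assumption, not NVIDIA's implementation; the function name and example policies are hypothetical:

```python
import numpy as np

def combine_rmps(policies):
    """Combine (acceleration, metric) pairs into a single command.

    Each policy proposes an acceleration `a` and a positive semi-definite
    metric `M` expressing how strongly it cares about each direction.
    The combined command is the metric-weighted average of all proposals.
    """
    m_total = sum(m for _, m in policies)
    f_total = sum(m @ a for a, m in policies)
    # Pseudoinverse handles directions no policy has an opinion about
    return np.linalg.pinv(m_total) @ f_total

# Hypothetical example on a 2-D point: one policy pulls toward a goal
# along x, another pushes away from an obstacle along y only.
attract = (np.array([1.0, 0.0]), np.eye(2))            # cares about x and y
avoid   = (np.array([0.0, 2.0]), np.diag([0.0, 1.0]))  # cares about y only
a = combine_rmps([attract, avoid])
```

Because each policy carries its own metric, the avoidance policy here can dominate the y direction without disturbing progress along x, which is how a library of simple behaviors composes into a complex one.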
“We really feel that the time is right to develop the next generation of robots. By pulling together recent advances in perception, control, learning, and simulation, we can help the research community solve some of the world’s greatest challenges,” said Fox.