The Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Toronto have released a paper demonstrating a new system called VirtualHome that has the potential to teach robots to execute household chores. The system features a 3D virtual world where artificial agents undertake up to 1,000 tasks assigned to them.
3,000 activities incorporated
To create VirtualHome, the researchers incorporated into the system nearly 3,000 programs of household activities, each broken down into its corresponding subtasks. The team then devised a way to illustrate the system through a 3D Sims-like world where artificial agents can be seen executing these activities in eight different rooms of a house.
The premise behind the project is that robots require explicit instructions to complete even the simplest of tasks. For instance, an instruction such as “turn off the light” would require additional inputs, or subtasks, such as “spot the light switch,” “walk to the light switch” and “press the light switch.”
“Describing actions as computer programs has the advantage of providing clear and unambiguous descriptions of all the steps needed to complete a task,” explained MIT PhD student Xavier Puig, lead author on the paper. “These programs can instruct a robot or a virtual character, and can also be used as a representation for complex tasks with simpler actions.”
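The decomposition described above can be pictured as a small program of atomic steps. The sketch below is illustrative only: the `Step` structure and the action names are assumptions for this article, not VirtualHome's actual program format.

```python
# Illustrative sketch (not VirtualHome's real format): representing the
# "turn off the light" task as an explicit program of atomic subtasks.
from dataclasses import dataclass

@dataclass
class Step:
    action: str   # atomic action, e.g. "walk"
    target: str   # object the action applies to

def program_for(task: str) -> list[Step]:
    """Return the explicit subtask program for a high-level instruction."""
    programs = {
        "turn off the light": [
            Step("find", "light switch"),
            Step("walk", "light switch"),
            Step("press", "light switch"),
        ],
    }
    return programs[task]

# Print the program in a script-like form, one step per line.
for step in program_for("turn off the light"):
    print(f"[{step.action.upper()}] <{step.target}>")
```

Because every step is spelled out, the same program can drive either a simulated character or, in principle, a physical robot.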
Instructions turned into code
The researchers created these robot-friendly instructions by turning verbal descriptions of household chores into code. These snippets were combined into programs, each representing a more complex action, and fed into the VirtualHome 3D simulator.
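The pipeline sketched above, where simpler action snippets are composed into a larger program and handed to a simulator, might look something like the following. All function names and the toy executor here are assumptions for illustration, not VirtualHome's API.

```python
# Illustrative sketch: composing simpler sub-programs into a complex
# activity and feeding the result to a (toy) simulator loop.

def walk_to(place):   return [("walk to", place)]
def grab(obj):        return [("find", obj), ("grab", obj)]
def operate(device):  return [("walk to", device), ("switch on", device)]

def make_coffee() -> list[tuple[str, str]]:
    # A complex activity built by concatenating simpler action snippets.
    return walk_to("kitchen") + grab("mug") + operate("coffee maker")

def simulate(program: list[tuple[str, str]]) -> list[str]:
    """Toy stand-in for the 3D simulator: execute steps in order,
    returning a log of what the agent did."""
    return [f"agent: {action} {target}" for action, target in program]

for line in simulate(make_coffee()):
    print(line)
```

The point of the design is that new complex tasks can be assembled from a fixed vocabulary of atomic actions rather than written from scratch each time.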
The simulator's virtual agents could then be seen on screen carrying out the given program tasks. This new database of robot instructions could one day be incorporated into Alexa-like robotic systems to improve the machines' ability to take on and execute new tasks.
“This line of work could facilitate true robotic personal assistants in the future,” said Qiao Wang, a research assistant in arts, media, and engineering at Arizona State University. “Instead of each task programmed by the manufacturer, the robot can learn tasks just by listening to or watching the specific person it accompanies. This allows the robot to do tasks in a personalized way, or even some day invoke an emotional connection as a result of this personalized learning process.”
This isn't the first time researchers have successfully trained robots to perform human-like tasks. In 2015, UC Berkeley developed algorithms that gave robots the ability to learn motor tasks.
Their work, a form of reinforcement learning, was welcomed as a major milestone in the field of artificial intelligence. The team named their robot BRETT (Berkeley Robot for the Elimination of Tedious Tasks) and had it complete a variety of chores.
Currently, the Allen Institute for Artificial Intelligence is also working on a robot-teaching virtual environment called THOR. This system defines objects, their corresponding uses and the actions robots can undertake with them, so that the machines learn to complete tasks through trial and error.
Via: MIT News