This Robot Can Learn Like a Baby and Predict Its Future
Researchers at UC Berkeley have created a robot that uses play to teach itself about objects. Inspired by the way babies and toddlers experiment with objects to work out how they can be manipulated, the robot learns about objects ‘from scratch’, building its understanding of the world through experimentation rather than pre-existing datasets.
The technology, called Vestri by its creators at the Department of Electrical Engineering and Computer Sciences, can visualize the consequences of its future actions. It uses a technique called 'visual foresight' that lets it manipulate objects it has never encountered before, and it can even steer around objects that are in the way of its play objectives. UC Berkeley assistant professor Sergey Levine explains the technology: “In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it. This can enable intelligent planning of highly flexible skills in complex real-world situations.”
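In rough terms, visual foresight means the robot ‘imagines’ many candidate motions, predicts what its camera would see after each one, and then executes the motion whose imagined outcome looks closest to the goal. The sketch below illustrates that idea only; the class and function names, the planar-push action representation, and the pixel-distance cost are assumptions made for this example, not details of the Berkeley system.

```python
import numpy as np


class VideoPredictionModel:
    """Stand-in for a learned model that predicts future camera frames
    given the current frame and a candidate sequence of arm motions."""

    def predict(self, current_frame, actions):
        # A real system would use a deep network trained on the robot's own
        # interaction data; this placeholder just repeats the current frame.
        return np.repeat(current_frame[None], len(actions), axis=0)


def goal_distance(predicted_frames, goal_frame):
    """Score how far the last imagined frame is from the desired outcome."""
    return float(np.mean((predicted_frames[-1] - goal_frame) ** 2))


def plan_action(model, current_frame, goal_frame, n_candidates=100, horizon=10):
    """Sample candidate motion sequences, imagine their outcomes, and return
    the first motion of the sequence whose imagined outcome looks best."""
    best_first_action, best_score = None, np.inf
    for _ in range(n_candidates):
        candidate = np.random.uniform(-1.0, 1.0, size=(horizon, 2))  # planar pushes
        imagined = model.predict(current_frame, candidate)
        score = goal_distance(imagined, goal_frame)
        if score < best_score:
            best_first_action, best_score = candidate[0], score
    return best_first_action


# Example use with dummy images:
action = plan_action(VideoPredictionModel(), np.zeros((16, 16)), np.ones((16, 16)))
```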
Vestri combines two types of AI learning
The project marks a significant step forward in the field of ‘robot education’. It combines two technologies integral to the artificial intelligence industry: reinforcement learning and deep learning.
Reinforcement learning trains robots by having them repeat tasks over and over, improving their approach with each attempt. Deep learning uses neural networks to help robots interpret their surroundings and tasks. While both technologies are used extensively in robotics, combining them in this way is a new approach. Carlos Guestrin, chief executive officer of the AI startup Dato and a professor of machine learning at the University of Washington, describes the work at Berkeley: “That’s been the holy grail of robotics.”
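To make the combination concrete, here is a minimal, self-contained sketch of the kind of loop the article describes: the robot gathers its own experience through repeated trial-and-error pushes, and that experience becomes the training data for a neural-network predictor. The SimulatedArm stand-in, the action ranges, and the tiny frame size are invented for illustration and are not details of the Berkeley setup.

```python
import numpy as np


class SimulatedArm:
    """Toy stand-in for the physical robot: its 'camera' is a small image
    in which a single bright pixel marks the object being pushed around."""

    def __init__(self):
        self.pos = np.array([8.0, 8.0])

    def camera_frame(self):
        frame = np.zeros((16, 16))
        x, y = np.clip(self.pos, 0, 15).astype(int)
        frame[x, y] = 1.0
        return frame

    def execute(self, action):
        # Apply a planar push, keeping the object inside the camera view.
        self.pos = np.clip(self.pos + action, 0, 15)


def collect_experience(arm, n_trials=500):
    """Let the robot 'play': try random pushes and record what happened.
    The resulting (before, action, after) triples are the kind of data a
    deep video-prediction network would later be trained on."""
    data = []
    for _ in range(n_trials):
        before = arm.camera_frame()
        action = np.random.uniform(-2.0, 2.0, size=2)
        arm.execute(action)
        data.append((before, action, arm.camera_frame()))
    return data


experience = collect_experience(SimulatedArm())
```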
Robots can learn to learn from children
While the robot and its systems are still in development, the hard-working Berkeley team is well on its way to creating something with both high intelligence and flexibility. “Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”
Scientists aim to teach robots tenacity
One big hurdle for the team is developing ways for the robot to cope better with failure. Robots tend to stop and shut down when faced with a task beyond their understanding, so the machines somehow need to be imbued with the childlike determination to keep trying and learn new skills. The team plans to introduce more complex types of play, such as picking up and putting down objects and manipulating soft objects. The research is being presented at the Neural Information Processing Systems conference in Long Beach, California this week.
Via: UC Berkeley