Here is something you may not have considered: how do robots see transparent and reflective objects? It's a trick question: they don't see them properly, which is why they struggle to grasp kitchen staples such as a shiny knife.
However, roboticists at Carnegie Mellon University have had success with a technique they've developed for teaching robots to pick up such objects.
Their new technique doesn't demand exotic sensors, exhaustive training, or human guidance. It relies on one thing only: a color camera.
Using machine learning to grab shiny objects
The CMU researchers built a system around a color camera that learns to infer an object's shape from color alone. To train it, they paired depth-camera images of opaque objects with color images of those same objects, teaching the color-based system to imitate the depth system's shape estimates and use them to grasp objects.
Once that worked, they applied the color-camera system to transparent and shiny objects, which it proved highly successful at grasping.
It sometimes misses, but for the most part, it can do the job
David Held, an assistant professor at CMU's Robotics Institute, said, "We do sometimes miss, but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects."
While the system wasn't foolproof, the multimodal transfer learning used to train it was so effective that it grasped opaque objects almost as well as the depth-camera system.
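The idea of multimodal transfer learning described above can be sketched as a teacher-student setup: a model that sees depth images supervises a model that sees only color images of the same scenes. The following is a minimal, purely illustrative sketch of that pattern; the synthetic data, linear models, and all names here are assumptions for demonstration, not CMU's actual architecture or training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired data: each sample is a flattened 8x8 "depth" image and
# a corresponding flattened 8x8x3 "color" image of the same scene
# (here the color image is just the depth signal replicated plus noise).
n_samples, depth_dim, color_dim = 200, 64, 192
depth_imgs = rng.normal(size=(n_samples, depth_dim))
color_imgs = np.repeat(depth_imgs, 3, axis=1) + 0.01 * rng.normal(size=(n_samples, color_dim))

# "Teacher": a fixed linear map from depth to a grasp-quality score,
# standing in for a pretrained depth-based grasping network.
teacher_w = rng.normal(size=depth_dim)
teacher_scores = depth_imgs @ teacher_w

# "Student": a linear model on color images, trained by gradient descent
# to imitate the teacher's outputs on the paired data (the transfer step).
student_w = np.zeros(color_dim)
lr = 0.01
for _ in range(2000):
    pred = color_imgs @ student_w
    grad = color_imgs.T @ (pred - teacher_scores) / n_samples
    student_w -= lr * grad

mse = np.mean((color_imgs @ student_w - teacher_scores) ** 2)
print(f"student-teacher MSE after transfer: {mse:.4f}")
```

After training, the student scores scenes from color alone, with no depth input at test time, which mirrors how a color-only grasping system could inherit shape knowledge from a depth-based one.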
Can grasp objects in cluttered piles
Thomas Weng, a Ph.D. student in robotics, said, "Our system not only can pick up individual transparent and reflective objects, but it can also grasp such objects in cluttered piles."
This is a notable advance: earlier efforts at the same task relied on training systems that made roughly 800,000 grasp attempts to learn the same skill.
The system will be presented this summer at the virtual International Conference on Robotics and Automation.