This dexterous robot hand can operate in complete darkness
Robot manipulation has advanced significantly in recent years, translating into higher levels of dexterity than previously possible. However, sensorimotor control, in which the robot actively uses sensor readings to adjust its actions, remains difficult. In some cases, even after a robot has secured an object, a single careless action can easily cause it to drop it.
In a bid to create a genuinely dexterous robot hand, a team of engineers at Columbia University has designed a prototype that doesn't rely on vision to manipulate objects, meaning it can operate in the dark. The study, which has not yet been published, aims to manipulate complex objects while keeping them securely grasped at all times, without relying on support surfaces, and to reorient the grasped object using only the hand's intrinsic sensing.
Replicating the human hand
Reorienting a grasped object is a difficult feat because it requires constant repositioning of the fingers to keep the object stable. The researchers found that the robot hand could perform this task based solely on touch sensing; because the hand carries no cameras, it does not depend on light or vision at all. This means the robot can operate in little to no light, and even in complete darkness.
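To illustrate what vision-free, intrinsic sensing can look like in practice, here is a minimal Python sketch that builds a controller observation purely from joint angles and fingertip touch readings. The sensor layout, dimensions, and function names are illustrative assumptions, not the team's actual code.

```python
import numpy as np

# Hypothetical sketch: the observation fed to the controller contains only
# the hand's intrinsic signals (joint angles and fingertip touch readings).
# No camera images are involved, so lighting conditions are irrelevant.

NUM_JOINTS = 15          # independently actuated joints (per the article)
NUM_FINGERS = 5          # one tactile pad per fingertip (assumed layout)
TAXELS_PER_FINGER = 16   # assumed resolution of each touch sensor

def build_observation(joint_angles: np.ndarray, touch_maps: np.ndarray) -> np.ndarray:
    """Concatenate proprioception and touch into a single flat vector."""
    assert joint_angles.shape == (NUM_JOINTS,)
    assert touch_maps.shape == (NUM_FINGERS, TAXELS_PER_FINGER)
    return np.concatenate([joint_angles, touch_maps.ravel()])

# Example: a dark room changes nothing, because no pixels are read.
obs = build_observation(np.zeros(NUM_JOINTS),
                        np.zeros((NUM_FINGERS, TAXELS_PER_FINGER)))
print(obs.shape)  # (95,)
```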
“While our demonstration was on a proof-of-concept task, meant to illustrate the capabilities of the hand, we believe that this level of dexterity will open up entirely new applications for robotic manipulation in the real world,” said Matei Ciocarlie, associate professor at Columbia. “Some of the more immediate uses might be in logistics and material handling, helping ease up supply chain problems like the ones that have plagued our economy in recent years, and in advanced manufacturing and assembly in factories.”
The robot hand has five fingers and 15 independently actuated joints, and each finger is equipped with touch-sensing technology that the team developed in-house. To test the capabilities of the hand, the team used reinforcement learning, a method that endows the robot with the ability to adapt, improve, and reproduce tasks under changing constraints through exploration and autonomous learning.
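To make the reinforcement-learning idea concrete, the sketch below pairs a toy stand-in environment, which rewards rotating a grasped object toward a target orientation while penalizing drops, with a simple random-search loop that keeps whichever policy perturbations improve the reward. The environment, reward, and search method are illustrative assumptions only, not the team's actual algorithm or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 95, 15   # touch+joint observation, one torque per joint

class ToyInHandEnv:
    """Stand-in environment: reward = progress toward a target orientation,
    with a large penalty if the simulated object is dropped."""
    def reset(self):
        self.angle, self.target = 0.0, np.pi / 2
        return rng.normal(size=OBS_DIM)

    def step(self, action):
        self.angle += 0.01 * float(np.tanh(action.mean()))    # crude rotation model
        dropped = rng.random() < 0.01 * np.abs(action).max()  # jerky actions risk drops
        reward = -abs(self.target - self.angle) - (10.0 if dropped else 0.0)
        return rng.normal(size=OBS_DIM), reward, dropped

def policy(obs, weights):
    return np.tanh(weights @ obs)   # linear policy: one torque per joint

# Random-search training loop: a simple stand-in for a full RL algorithm.
env = ToyInHandEnv()
weights = 0.1 * rng.normal(size=(ACT_DIM, OBS_DIM))
best = -np.inf
for episode in range(200):
    candidate = weights + 0.05 * rng.normal(size=weights.shape)
    obs, total = env.reset(), 0.0
    for _ in range(100):
        obs, reward, done = env.step(policy(obs, candidate))
        total += reward
        if done:
            break
    if total > best:                # keep perturbations that improve the return
        best, weights = total, candidate
print("best episode return:", round(best, 2))
```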
The ultimate goal of the Columbia team is to combine the capabilities of this dexterous robot hand with abstract, semantic, and embodied intelligence. The team believes large language models such as OpenAI's GPT-4 or Google's PaLM may provide the semantic intelligence.