Extremely small robots could one day do everything from helping surgeons operate on patients to keeping factories running smoothly. But designing machines that can hardly be seen with the naked eye has proven extraordinarily difficult.
That’s why it’s big news that a team of researchers has taken inspiration from the semiconductor industry to design a new kind of micro-robot. The groundbreaking design allows an operator to use a tightly focused beam of visible light to control the bot. The light causes materials in and around the leg joints to expand. Taking the light away causes contraction. This simple means of control can send these tiny robots — less than a millimeter in diameter — walking, crawling, and scuttling across a surface.
The technology is presented for the first time Wednesday in a paper published in the peer-reviewed journal Science Robotics.
Interesting Engineering caught up with co-author John Rogers. The robotics engineer is a professor of Materials Science and Engineering, Biomedical Engineering, and Neurological Surgery at Northwestern University, a recipient of the MacArthur “genius grant,” and a member of the National Academy of Sciences and the National Academy of Engineering. He explained how the new invention works and why building tiny robots requires overcoming big problems.
This interview has been edited for length and clarity.
Interesting Engineering: What’s the state of the micro-robotics field today?
John Rogers: There's a growing research interest across various academic and startup laboratories around very small-scale robots that can be controlled remotely. One of the long-term aspirations for research in this area is patient care, such as advanced surgical or diagnostic tools that can be operated in a minimally invasive manner. You might also imagine various industrial applications, such as small-scale machines for assembly, repair, and maintenance of structures that are difficult to access.
IE: How is your group pushing the technology forward in terms of making the robots themselves?
JR: I run a group whose core expertise is in material science and microfabrication. We have all sorts of unique capabilities in depositing and patterning thin films of materials, much in the same way that companies in the electronics industry form integrated-circuit chips. We combine those very sophisticated, well-established methods with a scheme that is conceptually similar to a children's pop-up book. That allows us to geometrically transform flat, planar structures into complex 3D architectures. That’s what we use to define the bodies, the skeletal structures, and the muscles of the robot.
IE: How does the pop-up type manufacturing work?
JR: We start out with these integrated circuit-style methods for forming thin, multilayer films of materials that we then pattern into flat, 2D geometries. Then we remove those patterned thin film structures from the underlying substrate support and transfer them physically to a stretched piece of rubber. It's a little more sophisticated than that, but basically, it’s stretched out, like a drum head.
Then we bond those flat, patterned thin-film structures to that stretched piece of rubber, such that when we relax the stretch, it compresses the flat pattern structure. That causes the flat, 2D structure to buckle up and adopt a complex 3D geometry. We specify the exact 3D shape by patterning the 2D precursor structure and stretching the rubber substrate in certain ways. Using this method, we can create robots that look just like crabs, inchworms, or crickets — different kinds of things. That strategy for building 3D structures is unique to our group.
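For a rough sense of scale, the pop-up geometry can be estimated with a textbook post-buckling relation for a thin ribbon bonded at both ends to a prestretched substrate: releasing a prestrain ε pops the ribbon out of plane to a height of roughly (2L/π)·√ε. The ribbon length and prestrain below are illustrative assumptions, not figures from the paper:

```python
import math

def popup_amplitude(length_um, prestrain):
    """Approximate out-of-plane pop-up height of a thin ribbon whose two
    ends are bonded to a substrate prestretched by `prestrain`, using the
    standard post-buckling estimate A ~ (2L/pi) * sqrt(eps)."""
    return (2.0 * length_um / math.pi) * math.sqrt(prestrain)

# A hypothetical 500 µm ribbon on rubber prestretched by 20%:
print(round(popup_amplitude(500.0, 0.20), 1))  # pop-up height in µm -> 142.4
```

The real structures involve multilayer films and engineered bonding sites, so this is only an order-of-magnitude sketch of how prestrain sets 3D shape.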
IE: What about locomotion? How do these robots move?
JR: We've been able to come up with a scheme that is, I think, unique. It exploits a class of material known as a shape memory alloy, a particular metal alloy whose defining characteristic is an ability to change phase upon heating. That phase change allows it to return from a deformed configuration to a previously defined shape.
That mechanism serves as the muscles in our robot, located strategically at the joints of the legs. The shape memory effect is complemented by a very thin layer of glass that we deposit onto these robots as a skeleton. It's the balance of that elastic restoring force with this memory effect that allows us to move the legs back and forth and establish a walking gait, a jumping behavior, or an inchworm-type mode of locomotion.
IE: How are you able to control that mechanism remotely?
JR: It's remote control in the sense that we are causing the robot to move in programmed directions and at programmed speeds without any direct physical contact. It is not remote control in the sense that a remote-control car operates. We're effecting the control with visible light rather than radio-frequency waves.
We use a light source to illuminate these robotic structures at different locations across their bodies in a timed, programmed sequence. When the light hits these shape memory alloys, some of it is absorbed. That causes a small amount of heating, which causes the corresponding part of the robot to physically move. When the light is removed, the joint quickly cools. As it cools, the skeletal structure elastically restores the limb to its original position and geometry.
If you do that over and over again, you can cause a leg to move back and forth, and you can move the left legs before the right legs, for example, and that causes a left-to-right motion. The way we scan the light across the body of the robot determines the direction and the speed of its motion.
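The ratchet-like interplay Rogers describes, a gripping power stroke during heating followed by a low-resistance elastic recovery during cooling, can be caricatured in a few lines. The stroke length and slip fraction below are made-up illustrative numbers, not measurements from the paper:

```python
STROKE_UM = 5.0   # hypothetical leg stroke per illumination pulse, in µm
SLIP = 0.9        # assumed fraction of the recovery stroke the claw lets slip

def pulse(position_um):
    # Heating: the shape memory joint bends, the clawed foot grips,
    # and the power stroke advances the body.
    position_um += STROKE_UM
    # Cooling: the glass skeleton elastically restores the leg; the claw
    # slips, so the body recoils by only a fraction of the stroke.
    position_um -= (1.0 - SLIP) * STROKE_UM
    return position_um

pos = 0.0
for _ in range(100):      # 100 scanned light pulses on one leg
    pos = pulse(pos)
print(round(pos, 1))      # net forward displacement in µm -> 450.0
```

In the real device, the pattern in which light is scanned across many legs sets direction and speed; this single-leg toy model only captures why each heat-and-cool cycle yields net displacement rather than a symmetric wiggle.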
IE: How are these robots controlled when they're inside an enclosed space?
JR: These robots are not going to be applicable to every use scenario. There will be circumstances where this mechanism just is not going to work. I wouldn't want to claim otherwise. But, you know, if you are in a confined space, you might imagine a fiber-optic light delivery scheme, and there may be different ways. You do have to have optical access — either direct line of sight access or something that can be addressed with a waveguide. I don't want to do anything that would overclaim what we've been able to accomplish. I think it's something that hasn't been done before, but it's not without limitations.
IE: What have been some of the biggest challenges up to this point, in terms of engineering?
JR: Just conceiving of this method of actuation required some insight and creative thought. Optimizing the way we create 3D structures involves a number of different challenges. One was figuring out how to get these legs to push off from a solid surface in a way that doesn't just cause the robot to wiggle back and forth. We had to structure the feet and add claws so that they could push off in one consistent direction.
That may seem like a subtle thing, but if you don't do it properly, you actuate the crab and it just wiggles back and forth. Thinking about the nature of the forces and the interactions between the robots' legs and the solid surfaces they're sitting on required some careful attention.
IE: What sort of forces do you have to confront when dealing with these extremely small robots?
JR: As things get smaller and smaller, they tend to get stickier and stickier. For example, if you have a really tiny dust particle sitting on your desk, you can blow on it very hard and it won't budge, because it's stuck there due to van der Waals forces. These are generalized adhesion forces that exist between any two solid objects, almost independent of the chemistry.
As terrestrial robots get smaller and smaller, you really have to think about sticky feet. You need really strong mechanical actuators. It's a consequence of the physics that you just have to live with. The fact that the insect world navigates pretty effectively at these scales is a proof of concept that it should be possible, but it is something you have to grapple with as a robotics engineer.
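The dust-particle intuition can be made quantitative with the standard sphere-on-flat van der Waals formula F = A_H·R/(6D²): adhesion scales with radius while weight scales with radius cubed, so their ratio grows as 1/R² as things shrink. The Hamaker constant, contact separation, and density below are generic textbook-style values, not numbers from the interview:

```python
import math

A_H = 1e-19    # J, assumed Hamaker constant (typical for common solids)
D = 0.3e-9     # m, assumed contact separation
RHO = 2000.0   # kg/m^3, assumed particle density
G = 9.81       # m/s^2

def adhesion_over_weight(radius_m):
    """Ratio of sphere-on-flat van der Waals force to the sphere's weight."""
    f_vdw = A_H * radius_m / (6.0 * D ** 2)
    weight = RHO * (4.0 / 3.0) * math.pi * radius_m ** 3 * G
    return f_vdw / weight

print(f"{adhesion_over_weight(1e-6):.1e}")  # 1 µm dust grain: adhesion dominates
print(f"{adhesion_over_weight(1e-3):.1e}")  # 1 mm particle: forces comparable
```

With these assumed values, adhesion beats gravity by roughly a million-fold for a micron-scale grain, which is why the dust particle won't budge when you blow on it.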
IE: How did you overcome the problem of "sticky feet" with these robots?
JR: It's a matter of engineering the feet. These claw structures drive the locomotion, and they're also managing that stiction effect. The robot bodies we're talking about here are half-millimeter to maybe the diameter of a human hair. The stiction effects are not overwhelming at that scale. But if you reduce the size by another factor of 10, then you're talking about a pretty daunting situation where our current approaches might not be the solution. It could be that we need a new idea for those.