Scientists say autonomous robots should think more like bugs. Here's why
Are the researchers and engineers who are building artificially intelligent robots focusing too much on processing power? Are they taking inspiration from the wrong forms of natural intelligence?
Some of their colleagues think so. In a review article published June 15 in the peer-reviewed journal Science Robotics, a team of researchers writes that as robots become smaller and smaller (and as the end of Moore's law comes closer), it's going to become more difficult to meet the demand for robots that can accomplish increasingly challenging tasks without human help. In the language of robotics, "available onboard computing capabilities and algorithms represent a considerable obstacle to reaching higher levels of autonomy."
Their solution? Look to nature for inspiration. Specifically, find ways to emulate how groups of insects — each with a tiny, very limited brain and body — can accomplish impressive feats that even the most sophisticated robots could never pull off. Interesting Engineering recently sat down with Dr. G.C.H.E. (Guido) de Croon, a roboticist at TU Delft in the Netherlands and one of the paper's co-authors, to talk about autonomous robots, insect intelligence, and why processing power isn't everything.
This interview has been edited for length and clarity.
Interesting Engineering: How did it occur to you that insects might make for a good source of inspiration in designing autonomous robots?
G.C.H.E. (Guido) de Croon: My background education is in artificial intelligence. I got really enthusiastic when there was a professor at my university teaching about embodied artificial intelligence, emphasizing that in animals and nature, intelligence is not only in the brain — it's also in the intricate interplay between the body, the sensors, and behavior. Of course, what goes on in the brain is also important.
I was already looking at artificial intelligence for robots at the time. Even then, robotics research was focused on building highly detailed 3D maps of the environment, which is still the case for self-driving cars. That kind of intelligence is different from that of insects. They don't navigate by building huge, highly detailed 3D models. They do it in a much more efficient way, and this really fascinated me.
IE: There are many kinds of robots. What are you and your co-authors talking specifically about in the new paper?
de Croon: When some people think of robots, they think of humanoids: a robot that looks and walks like a human. But I think more about small robots — small flying robots, small walking robots, small driving robots — with "small" being anywhere from 30 centimeters (1 foot) in diameter to truly insect-sized in the future.
When you have these small robots, you want them to do something useful. And then very quickly, you already think "okay, there have to be multiple of them doing something together." And in order to be useful, they have to do it by themselves. So how are we going to make small flying drones, for example, that fly by themselves? This is very challenging because they cannot carry the amount of processing and sensing hardware that self-driving cars have, so we need to take a completely different angle. This is what the paper is about, to say "hey, people, we should be looking at insect intelligence for this."
IE: Can we drill down further on the robot types? What are some use cases connected with these small robots?
de Croon: For example, I'm currently working on drones in greenhouses. So these drones are, let's say, 30 centimeters in diameter. They fly autonomously in a greenhouse and take pictures of the crop. Because the greenhouse sector is very big in the Netherlands, it's very important to detect diseases and pests at a very early stage. Otherwise, you have to eliminate lots of plants. These drones are going to help with that. But they can also help with the step to precision agriculture, where they really see what each plant needs — "this one needs more water" — and things like that.
If you think of other types, you can think of small walking robots that pick up litter in a park or something to keep it clean or little boats that help clean up the plastic litter [in water]. If we think further into the future, you can even think about tiny drones pollinating.
IE: What does it mean for a robot to think more like an insect?
de Croon: The argument of the paper is that to make small robots autonomous, we need to look at the type of intelligence that insects have. We analyzed what characterizes this intelligence and found a nice word to describe it: parsimony. Insects are very efficient with their resources, and this parsimony is expressed in several different ways.
One of the things that insects — and animals in general — do is take actions to simplify tasks. For instance, they induce motion a lot. That makes it easier for them to see depth and distances.
IE: Meaning they collect more information about the environment by moving their body to change their perspective?
de Croon: Yes, imagine you're on a train and you're watching a train next to you. At some point, things start to move, and you're wondering, "okay, am I moving, or is the other train moving?" If you look the other way and see that the station is stationary, then you know it's the other train. You take a small action to change your perspective.
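The depth cue de Croon describes is motion parallax: when a camera (or eye) translates sideways, nearby objects sweep across the image faster than distant ones. A minimal sketch of the idea, with assumed numbers and not taken from the paper: for lateral translation at speed v, a point at depth Z produces optical flow of roughly f · v / Z pixels per second (f being the focal length in pixels), so one flow measurement can be inverted into a depth estimate.

```python
def depth_from_flow(flow_px_per_s: float, speed_m_per_s: float,
                    focal_px: float) -> float:
    """Estimate depth from lateral motion parallax: flow = f * v / Z."""
    if flow_px_per_s <= 0:
        raise ValueError("flow must be positive")
    return focal_px * speed_m_per_s / flow_px_per_s

# A nearby object streaks across the image; a distant one barely moves.
# (Illustrative values only.)
near = depth_from_flow(flow_px_per_s=200.0, speed_m_per_s=1.0, focal_px=400.0)
far = depth_from_flow(flow_px_per_s=20.0, speed_m_per_s=1.0, focal_px=400.0)
```

The key point is that the robot's own motion does the work: a single cheap camera plus a small deliberate movement replaces heavy stereo or laser ranging hardware.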
IE: You also write that insects make a lot of assumptions about their environment. Can you explain what that means?
de Croon: For example, you have these dung beetles. When they take some dung, they kind of assume that nobody's going to steal it. So, if a biologist takes the dung from a beetle, it actually just continues and goes into the nest. It's very efficient because it makes assumptions about the environment. In some cases — like when a biologist is doing an experiment [on dung beetles] — the assumption doesn't work. But this is really a minority case. In general, it's actually very robust. And, most importantly, it allows very small robots to do very complex tasks.
IE: The paper also discusses insect bodies. How can they offer useful examples for designing physical hardware?
de Croon: The body can be designed to simplify tasks. So one of the examples we give is flapping-wing drones. In flying insects, the wing shape changes in flight. At first, roboticists working on insect-sized flapping-wing drones tried to actuate this actively, but they found that if you redesign your wing a bit, then the deformations happen passively. This was a lighter, smarter design of the body that allows these insect-sized robots to take off. The body can be designed in many ways to simplify tasks.
IE: It's not just individual robots you're interested in, right? Can you explain how insects' social behavior offers insight into the future of small, autonomous robots?
de Croon: They work together in swarms. Not all insects, but social insects — like honey bees and ants — tackle complex tasks by working together. Individually, they're very limited. But through this interplay of simple behaviors, they can work together to solve complex tasks. For example, ants can find the shortest route to food.
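The ant example rests on stigmergy: ants deposit pheromone as they travel, shorter routes are completed more often per unit time, and so they accumulate pheromone faster, which attracts more ants. A toy simulation sketch (illustrative assumptions, not code from the paper) with two candidate routes shows the colony converging on the shorter one with no central planner:

```python
import random

random.seed(0)
lengths = [2.0, 5.0]     # route lengths; route 0 is the shorter path
pheromone = [1.0, 1.0]   # initial pheromone on each route
EVAPORATION = 0.1        # fraction of pheromone lost each time step

for _ in range(200):
    total = sum(pheromone)
    for _ant in range(20):
        # Each ant picks a route with probability proportional to pheromone.
        route = 0 if random.random() < pheromone[0] / total else 1
        # Deposit inversely proportional to route length: short trips
        # are completed more often, so the short route is reinforced faster.
        pheromone[route] += 1.0 / lengths[route]
    # Evaporation prevents old trails from dominating forever.
    pheromone = [(1 - EVAPORATION) * p for p in pheromone]

# After the loop, most pheromone sits on the shorter route (index 0).
```

Each individual rule is trivial, yet the colony-level outcome solves a shortest-path problem — exactly the kind of emergent capability the paper argues small robot swarms should exploit.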
IE: How does this approach you've outlined differ from what other robotics researchers are doing?
de Croon: The mainstream robotics approach focuses much more on the brain, on a human approach to intelligence. For example, it focuses on making highly detailed 3D maps, and on a kind of modularity where you just keep adding sensors to the robots, like "oh, yeah, a laser scanner can also work for our drone," or "we put a stereo vision system on a car."
You start adding things, and you want to do more stuff. Then you have to get more computing power. So, in the end, you get very heavy solutions in terms of the weight of the sensors and processors. Lots of calculations have to happen. Basically, we think it's overkill for many of the things that you want small autonomous robots to do.
IE: A lot of those systems are for safety. How can you make sure the parsimonious autonomous robots you're calling for don't inadvertently hurt people or cause other kinds of harm?
de Croon: We approach it a bit from a physical standpoint. It has to be safe even if everything goes wrong. If that's your starting point, then AI is not a problem there. The efficiency of the solutions comes with a certain simplicity as well. You can also call it elegance.
This is one of the reasons that we have been focusing on small drones. If you limit the size and weight of drones, they can become inherently safe, even if everything goes wrong. They cannot really hurt a person because the weight is already too low. Now, as soon as you have rotors, it's also a good idea to have something like prop guards.
We also work on flapping-wing drones that are actually very flexible. One of the co-authors works on insect-sized flapping wings, so it's extremely safe. Besides that, you need to think about velocity. Typically, we have them move not too fast because even if something is light, if it's moving at super-high speeds, then it can still be dangerous.
Simplicity also makes it more robust because you have a very good idea of what your system is doing. It's not like in other approaches where you may have tens of elements, each of which is very complex. If something goes wrong, it may be some interplay that you didn't foresee between all these things. I think that if you go for insect-inspired AI, it can be much easier to understand and guarantee robustness.