Worm-inspired liquid neural network grants drone autonomy

"That was the first time I thought, ‘this actually might be pretty powerful stuff’. It was pretty impressive to me."
Amal Jos Chacko
An illustration of a drone.


Haven’t we often heard of artists and innovators looking to nature for inspiration, finding creativity in everyday, mundane elements? Researchers at the Massachusetts Institute of Technology have now taken a leaf from nature’s book to train drones.

Liquid Neural Networks (LNNs) are a newer iteration of neural networks. Inspired by organic brains, they contain a set of neurons connected by synapses and can continuously adapt to new data inputs.

When presented with new data, the synapses binding the neurons that process it strengthen, thereby improving the network.
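The continuous-time behavior behind this adaptability can be illustrated with a toy "liquid time-constant" style neuron, whose effective time constant changes with its input. This is a minimal sketch with invented parameter values, not the study's actual model:

```python
import math

def ltc_step(x, inp, dt=0.01, tau=1.0, A=1.0, w=0.5, b=0.0):
    """One Euler integration step of a toy liquid time-constant neuron.

    The sigmoid gate f depends on the input, and it modulates both the
    neuron's effective time constant and its drive toward the level A,
    so the dynamics themselves shift with the data -- the 'liquid' part.
    """
    f = 1.0 / (1.0 + math.exp(-(w * inp + b)))   # input-dependent gate
    dx = -(1.0 / tau + f) * x + f * A            # ODE right-hand side
    return x + dt * dx

# Drive the neuron with a step input and let its state adapt.
x = 0.0
for t in range(1000):
    x = ltc_step(x, inp=1.0 if t > 200 else 0.0)
```

When the input switches on at step 200, the neuron settles toward a new equilibrium rather than a fixed one, which is the qualitative behavior that lets such networks keep adapting after training.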

The team wanted to evaluate how an LNN-based architecture could learn from high-dimensional, unstructured, and unlabeled data, and how the knowledge thus gained could transfer to uncharted territory.

The Caenorhabditis elegans worm seemed an ideal inspiration for the LNN. With just 302 neurons and 8,000 synaptic connections, the worm's brain is tiny compared with a human's.

However, since liquid neural networks are still in their nascent stage, a smaller brain makes it easier for the team to understand how the network works and to analyze possible improvements.

“We wanted to model the dynamics of neurons, how they perform, how they release information, one neuron to another,” Ramin Hasani, co-author of the study and research affiliate at MIT, told Popular Science.

Flying with flying colors

The team performed a series of closed-loop control experiments on the LNN-trained DJI quadcopter drone. These included fly-to-target tasks, stress tests, target rotation, and dynamic target tracking.
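Schematically, each of those closed-loop tasks repeats the same observe-decide-act cycle. The sketch below is a hypothetical illustration of that loop; the `DummyDrone` class and `policy` function are invented stand-ins for a real drone SDK and the trained network, not the team's code:

```python
class DummyDrone:
    """Invented stand-in for a real drone SDK; records commands instead of flying."""
    def __init__(self):
        self.commands = []

    def get_camera_frame(self):
        return [[0.0]]  # placeholder for a camera image

    def send_velocity(self, vx, vy, vz):
        self.commands.append((vx, vy, vz))

def policy(frame, state):
    """Placeholder for the trained network: maps a camera frame and the
    network's internal state to a velocity command plus a new state."""
    return (1.0, 0.0, 0.0), state  # e.g. 'fly forward'

def fly_to_target(drone, steps=100):
    """Closed-loop control: observe, decide, act, repeat."""
    state = None
    for _ in range(steps):
        frame = drone.get_camera_frame()   # observe
        cmd, state = policy(frame, state)  # decide (stateful inference)
        drone.send_velocity(*cmd)          # act
    return drone

drone = fly_to_target(DummyDrone())
```

The key point is that the network's decision at each step feeds back into what the camera sees next, which is what makes these "closed-loop" rather than one-shot evaluations.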

The LNN was first taught to identify a red chair, leading the drone to recognize and fly toward the chair from distances as far as 145 feet (45 meters).

“I think that was the first time I thought, ‘this actually might be pretty powerful stuff’,” said Makram Chahine, a graduate researcher at MIT and co-author of the study.

The team chose four different recurrent neural network architectures to set a baseline.

Test results showed that LNN-based architectures consistently outperformed the other models, clocking a success rate of over 90% in fly-to-target tasks. Further compelling evidence came from range, rotation, and occlusion robustness tests and dynamic target-tracking tasks.

Brain-inspired networks showed less drifting from their trajectories than traditional neural networks.

A worm-inspired future?

While it is too early to pinpoint the reasons for the high accuracy of liquid networks, the researchers hypothesize that it relates to their ability to understand causality.

“We want to create something that is understandable, controllable. But right now we are far away from that. Everything that we do as a robotics and machine learning lab is [for] all-around safety and deployment of AI in a safe and ethical way in our society, and we really want to stick to this mission and vision that we have,” Hasani says, talking about safety and over-automation concerns.

It might not be tomorrow, but soon enough, we might see drones trained to identify other objects, especially humans. Disaster response could soon see a serious level-up.

Study Abstract

Autonomous robots can learn to perform visual navigation tasks from offline human demonstrations and generalize well to online and unseen scenarios within the same environment they have been trained on. It is challenging for these agents to take a step further and robustly generalize to new environments with drastic scenery changes that they have never encountered. Here, we present a method to create robust flight navigation agents that successfully perform vision-based fly-to-target tasks beyond their training environment under drastic distribution shifts. To this end, we designed an imitation learning framework using liquid neural networks, a brain-inspired class of continuous-time neural models that are causal and adapt to changing conditions. We observed that liquid agents learn to distill the task they are given from visual inputs and drop irrelevant features. Thus, their learned navigation skills transferred to new environments. When compared with several other state-of-the-art deep agents, experiments showed that this level of robustness in decision-making is exclusive to liquid networks, both in their differential equation and closed-form representations.
