MIT reveals a new type of faster AI algorithm for solving a complex equation

Researchers solved a differential equation behind the interaction of two neurons through synapses, creating a faster AI algorithm.
Brittney Grimes
Conceptual illustration of an artificial neuron.

imaginima/iStock 

Artificial intelligence uses a technique called artificial neural networks (ANNs) to mimic the way the human brain works. A neural network uses input from datasets to “learn” and outputs predictions based on the given information.
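
As a rough illustration of that input-to-prediction flow, here is a minimal sketch of a tiny feed-forward network in Python; the random weights are stand-ins for values a real network would learn from training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    hidden = np.tanh(x @ W1 + b1)   # hidden layer: weighted sum + nonlinearity
    return hidden @ W2 + b2         # output layer: the network's prediction

x = rng.normal(size=(1, 4))                     # one input sample with 4 features
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # stand-in "learned" weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
print(forward(x, W1, b1, W2, b2))               # the prediction for this input
```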

Recently, researchers from the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) discovered a quicker way to solve an equation used in the algorithms behind ‘liquid’ neural networks.

Liquid neural networks

In January 2021, MIT researchers built ‘liquid’ neural networks, inspired by the brains of small species. They are considered ‘liquid’ because the algorithms can adjust to changes experienced by real-world systems, revising their equations as they receive new data. In other words, the algorithms behave like water, adapting to change the way liquid adjusts to the shape of whatever contains it.
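
As a loose sketch of that idea (a toy equation of our own, not the team’s published model), the neuron below lets the incoming data modulate both its decay rate and its drive, so the governing equation effectively shifts with each new observation.

```python
import numpy as np

def liquid_step(x, I, dt, tau=1.0, A=1.0, w=0.5, b=0.0):
    # One Euler step of a "liquid"-style neuron: the input I enters through
    # f, which modulates both the decay rate and the drive, so the effective
    # dynamics change with the data. (Illustrative, not the paper's exact form.)
    f = np.tanh(w * I + b)               # input-dependent term
    dx = -(1.0 / tau + f) * x + f * A    # input-modulated dynamics
    return x + dt * dx

x = 0.0
for I in [0.1, 0.9, -0.4]:               # stream of incoming observations
    x = liquid_step(x, I, dt=0.1)
print(x)
```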

The flexibility of the ‘liquid’ neural nets produced better decision-making estimates on various tasks that require sequential data. “This is a way forward for the future of robot control, natural language processing, video processing — any form of time series data processing,” said Dr. Ramin Hasani, a research affiliate at CSAIL and the lead author of last year’s study. “The potential is really significant.”

The research team noticed that the models were costly: the number of neurons and synapses required expensive, bulky computer programs to solve the core mathematics behind the algorithms. The math became increasingly difficult as the equations grew, often requiring many computational steps to reach a solution.

Creating a faster AI algorithm

The researchers who first created the ‘liquid’ networks a year ago have now found a way to ease this bottleneck by solving the differential equation behind the interaction of two neurons through synapses. Differential equations make it possible to calculate the state of the world, or of a phenomenon, as it evolves over time, step by step, not just from start to finish.
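
To see why the step-by-step approach is costly, consider numerically integrating a simple toy neuron equation with Euler’s method, a stand-in for the heavier solvers real models use: reaching any point in time means walking through every intermediate step.

```python
def euler_solve(x0, I, tau, t_end, dt):
    # Step-by-step numerical integration of dx/dt = (-x + I) / tau.
    # Reaching time t_end requires roughly t_end/dt sequential steps,
    # which is the computational bottleneck described above.
    x, t, steps = x0, 0.0, 0
    while t < t_end:
        x += dt * (-x + I) / tau
        t += dt
        steps += 1
    return x, steps

x, steps = euler_solve(x0=0.0, I=1.0, tau=0.5, t_end=10.0, dt=0.001)
print(x, steps)   # about 10,000 steps just to evaluate the state once at t = 10
```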

This allowed them to unlock a new type of fast artificial intelligence algorithm. The models have the same characteristics as liquid neural nets (flexible, fundamental, and explainable), but they are much quicker and more scalable. Liquid neural nets are the novel form of neural network that can adapt its behavior after it “learns” from input data.

The novel network outperformed its counterparts

The new network has been named the “closed-form continuous-time” (CfC) neural network. It has already outperformed various other artificial neural networks at making predictions and completing tasks, with higher speed and performance in recognizing human activities from motion sensors, modeling the physical dynamics of a simulated walker robot, and event-based sequential image processing. In medical predictions, the new models were 220 times faster than their equivalents at sampling 8,000 patients.

“The new machine-learning models we call ‘CfC’s’ replace the differential equation defining the computation of the neuron with a closed form approximation, preserving the beautiful properties of liquid networks without the need for numerical integration,” said Daniela Rus, the senior author of the new study, a professor at MIT, and the director of MIT CSAIL.
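
Hedging on the exact details, the general shape of such a closed-form computation might be sketched as follows: a time-dependent sigmoid gate blends two nonlinear heads, so the state at time t comes from a single expression rather than a solver loop. The weight matrices below are random stand-ins for what trained networks would supply.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_in = 3, 2
Wf = rng.normal(size=(n_state + n_in,))           # gate weights (stand-in)
Wg = rng.normal(size=(n_state + n_in, n_state))   # head g weights (stand-in)
Wh = rng.normal(size=(n_state + n_in, n_state))   # head h weights (stand-in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_like_state(x, I, t):
    # A hedged sketch of a closed-form computation: a time-dependent sigmoid
    # gate blends two nonlinear heads instead of integrating an ODE.
    z = np.concatenate([x, I])
    f = z @ Wf                        # gate pre-activation (learned in practice)
    g = np.tanh(z @ Wg)               # one head (learned in practice)
    h = np.tanh(z @ Wh)               # the other head (learned in practice)
    gate = sigmoid(-f * t)            # time enters here, in closed form
    return gate * g + (1.0 - gate) * h

print(cfc_like_state(np.zeros(3), np.array([0.2, -0.1]), t=5.0))  # direct evaluation at t = 5
```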

The equation

To capture the natural passage of time within the differential equation, and to describe both past and future behavior, the research team used a ‘closed-form’ solution, one that describes the whole system in a single expression.

Using this approach, the team can compute the equation at any time in the future or the past, and at a much quicker rate. The new model is faster because it does not require the step-by-step computations of the usual differential equation solvers.
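
For intuition, the same toy leaky-neuron equation integrated step by step above has a known exact closed form (again, an illustrative example, not the paper’s model), so the state at any time, future or past, reduces to one expression:

```python
import numpy as np

# For the toy equation dx/dt = (-x + I) / tau with constant input I,
# the exact solution is known, so the state at ANY time t is a single
# expression rather than thousands of solver steps.
def closed_form(x0, I, tau, t):
    return I + (x0 - I) * np.exp(-t / tau)

print(closed_form(x0=0.0, I=1.0, tau=0.5, t=10.0))   # jump straight to t = 10
print(closed_form(x0=0.0, I=1.0, tau=0.5, t=-2.0))   # past times work too
```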

Example of the novel calculation

The flexibility of the ‘liquid’ neural nets could serve numerous tasks, including weather forecasting, heart monitoring, and autonomous driving. The researchers used the example of an end-to-end neural network that receives driving input from a camera mounted on a car. The network is trained to produce outputs such as the car’s steering angle.

The team used a liquid neural network with 19 nodes (connection points among the artificial neurons), along with a small perception module, to drive the car. A differential equation describes each node of the system. Using the closed-form solution, the network can closely reproduce the true behavior, since the solution is a good estimate of the system’s actual dynamics.
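
A hypothetical sketch of that pipeline, with random weights, a flattened 64-pixel ‘frame’, and plain tanh dynamics standing in for the trained perception module and the liquid nodes, might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)
N_NODES = 19                                   # as in the liquid-network driving setup

# All weights are random stand-ins for what training would learn.
W_percep = rng.normal(size=(64, 8)) * 0.1      # perception: 64-pixel frame -> 8 features
W_in = rng.normal(size=(8, N_NODES)) * 0.1     # features -> node inputs
W_rec = rng.normal(size=(N_NODES, N_NODES)) * 0.1  # node-to-node connections
w_out = rng.normal(size=N_NODES) * 0.1         # readout -> steering angle

def step(state, frame, dt=0.05, tau=1.0):
    feats = np.tanh(frame @ W_percep)              # small perception module
    drive = np.tanh(feats @ W_in + state @ W_rec)  # synaptic input to each node
    return state + dt * (-state / tau + drive)     # one step of each node's equation

state = np.zeros(N_NODES)
for _ in range(100):                               # stream of camera frames
    frame = rng.normal(size=64)
    state = step(state, frame)
print("steering angle:", state @ w_out)
```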

Researchers can solve the problem with fewer neurons, allowing for faster problem-solving and more cost-efficient outcomes. Cars can therefore be trained to drive autonomously from input data and a liquid neural net, and at a quicker pace. The models can also receive input as events happen in time, which could be used for both classification and controlling a car.

Autonomous cars on the road.

The new way of solving the equation can advance both natural and artificial intelligent systems. “When we have a closed-form description of neurons and synapses’ communication, we can build computational models of brains with billions of cells, a capability that is not possible today due to the high computational complexity of neuroscience models,” Dr. Hasani said of the new paper. “The closed-form equation could facilitate such grand-level simulations and therefore opens new avenues of research for us to understand intelligence.”

The paper was published yesterday in the journal Nature Machine Intelligence.

The future of ‘liquid’ neural networks and liquid CfC

There is evidence of ‘liquid’ CfC models learning tasks in one setting and transferring their skills and capabilities to an entirely new environment without further training. Dr. Hasani explained that neural network systems based on differential equations are difficult to solve and scale to millions of parameters.

Solving larger problems would require building larger-scale neural networks. “This framework can help solve more complex machine learning tasks — enabling better representation learning — and should be the basic building blocks of any future embedded intelligence system,” he stated.