Neuromorphic computing could lead to self-learning machines

German scientists present a method by which AI could be trained much more efficiently.
Sejal Sharma

In the last couple of years, research institutions have been exploring new concepts for how computers could process data in the future. One of these concepts is neuromorphic computing. Neuromorphic computing may sound similar to artificial neural networks, but the two have little to do with each other: neuromorphic systems are physical, brain-inspired hardware, whereas today's neural networks are software running on conventional processors.

Whereas traditional artificial intelligence algorithms must be trained on large amounts of data before they become effective, neuromorphic computing systems can learn and adapt on the fly.

Against the backdrop of explosive growth in machine learning, researchers from Germany have now devised an efficient training method for neuromorphic computers.

A self-learning physical machine

"We have developed the concept of a self-learning physical machine," explains Florian Marquardt, a scientist at the Max Planck Institute for the Science of Light in Erlangen, Germany. "The core idea is to carry out the training in the form of a physical process, in which the parameters of the machine are optimized by the process itself."

As with conventional artificial neural networks, external feedback is needed during training to improve the model. In the self-learning physical machine the team proposes, however, the training itself runs as a physical process, which makes it much more efficient and saves energy.

"Our method works regardless of which physical process takes place in the self-learning machine, and we do not even need to know the exact process," explains Marquardt. "However, the process must fulfill a few conditions. Most importantly, it must be reversible, meaning it must be able to run forwards or backwards with a minimum of energy loss."
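The reversibility condition Marquardt describes can be illustrated with a toy simulation (an illustrative sketch, not the researchers' setup): a symplectic leapfrog integrator for a nonlinear oscillator retraces its own trajectory when the momentum is flipped and the same dynamics are run again.

```python
def step(q, p, dt):
    """One velocity-Verlet (leapfrog) step for an anharmonic oscillator,
    H = p^2/2 + q^2/2 + q^4/4, i.e. force F(q) = -q - q^3.
    The scheme is symplectic and exactly time-reversible."""
    p += 0.5 * dt * (-q - q**3)
    q += dt * p
    p += 0.5 * dt * (-q - q**3)
    return q, p

def evolve(q, p, dt, n):
    for _ in range(n):
        q, p = step(q, p, dt)
    return q, p

# forward evolution from some initial state ...
q0, p0 = 1.0, 0.0
qT, pT = evolve(q0, p0, dt=0.01, n=2000)
# ... then the "echo": flip the momentum and run the same dynamics again
qb, pb = evolve(qT, -pT, dt=0.01, n=2000)
# (qb, -pb) returns to (q0, p0) up to floating-point round-off
```

Running the dynamics "backwards" here is literally running them forwards with reversed momentum, which is exactly the kind of low-loss reversal the quote calls for.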

Based on neuromorphic architecture

A neuromorphic architecture is the opposite of a von Neumann architecture, on which most of our hardware today is based. “What is more, the von Neumann architecture that is currently employed by electronic devices is known to be highly inefficient for most ML applications,” note the researchers in their study.

The von Neumann architecture separates memory from computation, so chips must constantly shuttle information back and forth between the CPU and memory, which costs both time and energy. A neuromorphic architecture, which brings memory and processing together, is designed to eliminate that bottleneck.

"We hope to be able to present the first self-learning physical machine in three years," said Marquardt. "We are therefore confident that self-learning physical machines have a strong chance of being used in the further development of artificial intelligence."

The study was published in the journal Physical Review X.

Study abstract:

A physical self-learning machine can be defined as a nonlinear dynamical system that can be trained on data (similar to artificial neural networks) but where the update of the internal degrees of freedom that serve as learnable parameters happens autonomously. In this way, neither external processing and feedback nor knowledge of (and control of) these internal degrees of freedom is required. We introduce a general scheme for self-learning in any time-reversible Hamiltonian system. It relies on implementing a time-reversal operation and injecting a small error signal on top of the echo dynamics. We show how the physical dynamics itself will then lead to the required gradient update of learnable parameters, independent of the details of the Hamiltonian. We illustrate the training of such a self-learning machine numerically for the case of coupled nonlinear wave fields and other examples.
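The "small error signal on top of the echo dynamics" can be sketched numerically for a single harmonic degree of freedom (a hand-rolled toy with an assumed quadratic cost C = (q_T − target)²/2; it is not the authors' code). After the forward run, a momentum kick proportional to the output error is injected at the turnaround; after the time-reversed run, the gradient of the cost with respect to the initial conditions can be read off the deviation from the starting state.

```python
def leapfrog(q, p, dt, n):
    # velocity-Verlet steps for H = (p^2 + q^2) / 2, force F(q) = -q;
    # the map is symplectic and exactly time-reversible
    for _ in range(n):
        p += 0.5 * dt * (-q)
        q += dt * p
        p += 0.5 * dt * (-q)
    return q, p

def echo_gradient(q0, p0, target, dt, n, eps=1e-5):
    """Echo-based gradient of the toy cost C = (q_T - target)^2 / 2.

    Forward pass, then a small error kick on the momentum, then the
    time-reversed dynamics; the deviation from the initial state
    encodes dC/dq0 and dC/dp0 (derivation valid for linear dynamics)."""
    qT, pT = leapfrog(q0, p0, dt, n)
    err = qT - target                     # dC/dq_T at the output
    pT -= eps * err                       # inject the error signal
    qb, pb = leapfrog(qT, -pT, dt, n)     # time reversal: flip p, evolve, ...
    pb = -pb                              # ... and flip p back
    return -(pb - p0) / eps, (qb - q0) / eps   # (dC/dq0, dC/dp0)
```

A finite-difference check through the same forward map agrees with the echo readout. The recipe leans on the symplectic structure of Hamiltonian flow; it is exact here only because this toy dynamics is linear, whereas the paper's scheme handles general reversible Hamiltonian systems.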
