Like a human? Artificial neural networks need to sleep to learn better

It's time for artificial neurons to go to bed.
Nergis Firtina
Artificial neural network stock image. Artystarty/iStock

According to a recent study from the University of California San Diego, artificial neural networks can imitate the sleep patterns of the human brain to tackle catastrophic forgetting.

"The brain is very busy when we sleep, repeating what we have learned during the day," said Maxim Bazhenov, Ph.D., professor of medicine and a sleep researcher at the University of California San Diego School of Medicine in the press release. "Sleep helps reorganize memories and presents them in the most efficient way."

According to research by Bazhenov and colleagues, sleep strengthens rational memory (the capacity to recall arbitrary or illogical associations between objects, people, or events) and guards against forgetting previous memories.

In this representation of memories, sleep is a period during which the human brain can consolidate old memories with new ones, without loss of learning.

They fail from time to time

Although artificial neural networks, like the computers they run on, can work faster than the human brain, it turns out they also need rest.

"In contrast, the human brain learns continuously and incorporates new data into existing knowledge," said Bazhenov, "and it typically learns best when new training is interleaved with periods of sleep for memory consolidation."

Bazhenov, the study's senior author, and colleagues discuss how biological models of sleep might lessen the danger of catastrophic forgetting in artificial neural networks, increasing their usefulness across a range of research areas.

Get some sleep so you don't forget

The researchers employed spiking neural networks, which artificially imitate natural neural systems by transmitting information as discrete events (spikes) at specific times rather than continuously.
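
For readers unfamiliar with the format, here is a minimal sketch of a single leaky integrate-and-fire neuron, one of the simplest spiking-neuron models. It illustrates how information is carried by discrete spike times rather than continuous values; it is not the network architecture used in the study, and all parameter values are illustrative.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron.

    Returns the membrane-potential trace and the list of spike times.
    """
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # The membrane potential leaks toward rest and integrates input.
        v += (dt / tau) * (v_rest - v) + i_in * dt
        if v >= v_threshold:
            # Threshold crossing: emit a discrete spike, then reset.
            spike_times.append(step * dt)
            v = v_reset
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant input current produces a regular train of spikes.
voltage, spikes = simulate_lif(np.full(200, 0.08))
print("Spike times (ms):", spikes)
```

Because all communication in such a network happens at spike times, it lends itself to biologically inspired mechanisms like offline replay during sleep.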

They discovered that catastrophic forgetting was reduced when the spiking networks were trained on a new task but with sporadic off-line intervals that mirrored sleep. According to the study's authors, the networks may replay previous memories while sleeping, just like the human brain, without explicitly requiring prior training data.
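
The sketch below illustrates that training schedule in rough outline: gradient steps on a new task are interleaved with a "sleep" step. In the study, sleep is an offline replay of the network's own spiking activity; here it is approximated by pulling the weights back toward a snapshot consolidated after the first task, a crude stand-in used only to show the interleaving. All names and values are illustrative, not taken from the paper's code.

```python
import numpy as np

def task_gradient_step(w, X, y, lr=0.1):
    """One least-squares gradient step on the current task's data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def sleep_phase(w, w_consolidated, strength=0.2):
    """Stand-in for offline replay: nudge the weights back toward a
    previously consolidated configuration, using no old training data."""
    return (1 - strength) * w + strength * w_consolidated

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
y1 = X1 @ np.array([1.0, 0.0, 0.0])   # task 1: first feature matters
y2 = X2 @ np.array([0.0, 1.0, 0.0])   # task 2: second feature matters

w = np.zeros(3)
for _ in range(200):                   # learn task 1 first
    w = task_gradient_step(w, X1, y1)
w_consolidated = w.copy()              # snapshot after task 1

for _ in range(200):                   # task 2, interleaved with "sleep"
    w = task_gradient_step(w, X2, y2)
    w = sleep_phase(w, w_consolidated)

print("task 1 error:", np.mean((X1 @ w - y1) ** 2))
print("task 2 error:", np.mean((X2 @ w - y2) ** 2))
```

Dropping the sleep_phase call reproduces the catastrophic-forgetting pattern in this toy setting: task 2 error falls to near zero while task 1 error grows.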

"It meant that these networks could learn continuously, like humans or animals. Understanding how the human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory. 

Artificial neurons and the human brain

The human brain consists of roughly 86 billion cells called neurons and weighs, on average, between 1,250 and 1,500 grams. It is divided into two hemispheres, each covered by its own cortex, and the two sides differ both physically and functionally. In a loose computing analogy, the right hemisphere works something like a parallel processor, while the left works more like a serial processor.

Artificial neuron research generally aims to reproduce this kind of processing in machines by analyzing how humans think.

The study was published in PLOS Computational Biology on November 18, 2022.

Study abstract:

Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously, and typically learns best when new training is interleaved with periods of sleep for memory consolidation. Here we used a spiking network to study the mechanisms behind catastrophic forgetting and the role of sleep in preventing it. The network could be trained to learn a complex foraging task but exhibited catastrophic forgetting when trained sequentially on different tasks. In synaptic weight space, new task training moved the synaptic weight configuration away from the manifold representing the old task, leading to forgetting. Interleaving new task training with periods of off-line reactivation, mimicking biological sleep, mitigated catastrophic forgetting by constraining the network's synaptic weight state to the previously learned manifold, while allowing the weight configuration to converge towards the intersection of the manifolds representing old and new tasks. The study reveals a possible strategy of synaptic weight dynamics that the brain applies during sleep to prevent forgetting and optimize learning.
