Mind-Reading Neural Network Uses Brain Waves to Recreate Human Thoughts

The work can have significant applications in cognitive disorder treatment and post-stroke rehabilitation.

It has long been the stuff of science fiction, but mind-reading machines may actually be here, and they need not be invasive. Researchers from the Russian corporation Neurobotics and the Moscow Institute of Physics and Technology (MIPT) have found a way to visualize a person’s brain activity as actual images without the use of invasive brain implants.


Studying the brain in real time

The work could enable new non-invasive post-stroke rehabilitation devices controlled by brain signals, as well as novel treatments for cognitive disorders. To achieve such applications, neurobiologists need to understand how the brain encodes information by studying it in real time, for example while a person is watching a video.

This is where the new brain-computer interface developed by the researchers comes in. Using artificial neural networks and electroencephalography, or EEG, a technique for recording brain waves via electrodes placed noninvasively on the scalp, the team was able to visualize what test subjects were watching in videos in real time.

“We’re working on the Assistive Technologies project of Neuronet of the National Technology Initiative, which focuses on the brain-computer interface that enables post-stroke patients to control an arm exoskeleton for neurorehabilitation purposes, or paralyzed patients to drive, for example, an electric wheelchair. The ultimate goal is to increase the accuracy of neural control for healthy individuals, too,” said Vladimir Konyshev, who heads the Neurorobotics Lab at MIPT.

A two-step experiment

In the experiment, neurobiologists first asked subjects to watch YouTube video fragments from five arbitrary video categories while EEG data was collected. The EEG data showed that the brain wave patterns were distinct for each category of videos, enabling the team to analyze the brain’s response to videos in real time.
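The paper does not publish its analysis code, but the idea that per-category EEG patterns are separable can be illustrated with something far simpler than a deep network: band-power features (delta/theta/alpha/beta) fed to a nearest-centroid classifier. The sketch below is a toy illustration on synthetic signals, not the authors' method; all function names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def band_power_features(eeg, fs=250):
    """Crude per-channel band-power features via an FFT (illustrative only)."""
    spectrum = np.abs(np.fft.rfft(eeg, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    bands = [(1, 4), (4, 8), (8, 13), (13, 30)]  # delta, theta, alpha, beta
    return np.array([
        spectrum[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
        for lo, hi in bands
    ]).T.ravel()

class NearestCentroid:
    """Assigns each sample to the class whose mean feature vector is closest."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=-1)
        return self.labels_[d.argmin(axis=1)]

# Synthetic "EEG": two categories dominated by different rhythms
# (10 Hz alpha vs. 20 Hz beta), plus a little noise.
fs, t = 250, np.arange(500) / 250
alpha = np.sin(2 * np.pi * 10 * t)
beta = np.sin(2 * np.pi * 20 * t)
X = np.array(
    [band_power_features(alpha[None] + 0.1 * rng.normal(size=(1, 500)), fs)
     for _ in range(10)] +
    [band_power_features(beta[None] + 0.1 * rng.normal(size=(1, 500)), fs)
     for _ in range(10)]
)
y = np.array([0] * 10 + [1] * 10)
clf = NearestCentroid().fit(X, y)
```

On these cleanly separated synthetic rhythms the classifier recovers the category of every trial; real EEG is far noisier, which is why the researchers needed trained neural networks rather than hand-built features.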

In the second phase of the experiment, the researchers developed two neural networks. The first generated random category-specific images from “noise”; the second mapped EEG recordings onto similar “noise” vectors. The two networks were then chained so that EEG signals could be turned into actual images.

To test the new system, the subjects were shown previously unseen videos while their EEG was recorded and fed to the neural networks. The system produced images that could be easily categorized in 90% of cases.

“The electroencephalogram is a collection of brain signals recorded from the scalp. Researchers used to think that studying brain processes via EEG is like figuring out the internal structure of a steam engine by analyzing the smoke left behind by a steam train,” explained paper co-author Grigory Rashkov, a junior researcher at MIPT and a programmer at Neurobotics. “We did not expect that it contains sufficient information to even partially reconstruct an image observed by a person. Yet it turned out to be quite possible.”

“What’s more, we can use this as the basis for a brain-computer interface operating in real time. It’s fairly reassuring. Under present-day technology, the invasive neural interfaces envisioned by Elon Musk face the challenges of complex surgery and rapid deterioration due to natural processes — they oxidize and fail within several months. We hope we can eventually design more affordable neural interfaces that do not require implantation,” the researcher added.


The study was published as a preprint on bioRxiv, and a video showing the system at work is available online.

