Researchers Can Now Use Brainwaves to Reconstruct What People See

A team from the University of Toronto can reconstruct an image of what a person sees from EEG readings processed through a machine learning algorithm.

Dan Nemrodov (left) and Adrian Nestor (center) talk a subject through the study. Ken Jones/University of Toronto

For decades, brainwaves and other measures of brain activity could only tell us how the brain responded to an image. Now, researchers have developed a technique that uses those brainwaves to reconstruct the image that triggered the response in the first place.

Neuroscientists from the University of Toronto Scarborough can take electroencephalography (EEG) data and effectively work backwards. The process was developed by postdoctoral fellow Dan Nemrodov and Assistant Professor Adrian Nestor, along with their students.

"When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process," said Nemrodov.

The team hooked test subjects up to EEG equipment and showed them pictures of people's faces while the EEG recorded their brain activity. The researchers then fed that activity through a machine learning algorithm, which used the information to digitally recreate the face each subject had seen.
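The study does not publish its code, but the general approach it describes, recording EEG epochs while a subject views face images and then training a model to map those signals onto image features, can be illustrated with a minimal sketch. Everything below (the synthetic data, the ridge-regression mapping, the feature dimensions) is an assumption for illustration, not the Toronto team's actual pipeline.

```python
# Illustrative sketch only: learn a mapping from EEG epochs to image-feature
# vectors with ridge regression. Synthetic random data stands in for real
# recordings; the study's actual model and features are not reproduced here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trials, n_channels, n_samples = 200, 64, 256   # hypothetical EEG epoch dimensions
n_image_features = 50                            # hypothetical face-feature dimension

# Fake EEG epochs (trials x channels x time) and the image features each
# trial's stimulus would decompose into (e.g., components of face photos).
eeg = rng.standard_normal((n_trials, n_channels, n_samples))
image_features = rng.standard_normal((n_trials, n_image_features))

# Flatten each epoch into one feature vector per trial.
X = eeg.reshape(n_trials, -1)

X_train, X_test, y_train, y_test = train_test_split(
    X, image_features, test_size=0.2, random_state=0
)

# Learn a linear mapping from brain activity to image features; the predicted
# features could then be passed to an image model to render a face.
model = Ridge(alpha=1.0)
model.fit(X_train, y_train)
predicted_features = model.predict(X_test)
print(predicted_features.shape)   # (n_test_trials, n_image_features)
```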

"fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale. So we can see with very fine detail how the percept of a face develops in our brain using EEG," Nemrodov explained.

The study concluded that it takes the human brain approximately 0.17 seconds to form a good impression of a face that flashes before our eyes, and that short window was all the team needed to track the brain's response. The work expands on earlier research by Nestor, who pioneered the method of reconstructing images from brain activity.
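To give a sense of what "millisecond scale" means in practice, here is a small, hypothetical example of slicing epoched EEG data around the roughly 0.17-second mark mentioned above. The sampling rate and array shapes are assumptions for illustration, not values taken from the study.

```python
# Hypothetical example: pull out the EEG samples around ~170 ms after
# stimulus onset, the window in which the study says a face percept forms.
import numpy as np

sampling_rate = 512                       # Hz (assumed)
n_trials, n_channels = 200, 64            # assumed epoch dimensions
epochs = np.random.randn(n_trials, n_channels, sampling_rate)  # 1 s per trial

window_start, window_end = 0.15, 0.19     # seconds, bracketing the 0.17 s mark
start_idx = int(window_start * sampling_rate)
end_idx = int(window_end * sampling_rate)

face_window = epochs[:, :, start_idx:end_idx]
print(face_window.shape)                  # (200, 64, ~20 samples of ~2 ms each)
```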

The future implications could be huge

Nestor, Nemrodov and the other neuroscientists think this style of image reconstruction overcame the main limitations they anticipated at the start of the study. The implications could be huge: pairing EEG data with machine learning could expand what was previously thought possible without access to expensive neuroimaging equipment.


"It could provide a means of communication for people who are unable to verbally communicate. Not only could it produce a neural-based reconstruction of what a person is perceiving, but also of what they remember and imagine, of what they want to express," Nestor said in a press statement

"It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects rather than relying on verbal descriptions provided to a sketch artist."


Forensic artists can only do so much with the details they're given, especially if the person being questioned didn't get a good look at the possible perpetrator. This technology could bypass the need for a forensic artist, or serve as secondary validation for a sketch based on a witness's verbal description.

"What’s really exciting is that we’re not reconstructing squares and triangles but actual images of a person’s face, and that involves a lot of fine-grained visual detail," added Nestor.


"The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities. It unveils the subjective content of our mind and it provides a way to access, explore and share the content of our perception, memory and imagination."
