Neuroscientists use brain waves to reconstruct Pink Floyd song

A team of scientists from the University of California, Berkeley, reconstructed Pink Floyd's 'Another Brick in the Wall' from the brain waves of 29 patients with epilepsy using nonlinear models.
Sejal Sharma
Representational image
Getty Images 

Scientists have previously succeeded in predicting the words of a person engaged in a normal conversation by simply decoding electrical activity in the brain’s temporal lobe.

Eleven years later, a team of scientists from the same laboratory at the University of California, Berkeley, was able to reconstruct a Pink Floyd song from the brain waves of 29 people using nonlinear models (models in which the output does not change in simple proportion to the input).
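The difference between linear and nonlinear decoding can be illustrated with a toy example. This is not the study's actual model; the data, features, and fitting procedure below are invented for illustration, showing only why a nonlinear relationship defeats a purely linear fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "neural activity" X and "audio feature" y, where y depends
# nonlinearly (here, quadratically) on the activity.
X = rng.normal(size=(200, 1))
y = X[:, 0] ** 2 + 0.05 * rng.normal(size=200)

def fit_least_squares(features, target):
    """Ordinary least squares with an intercept column; returns predictions."""
    A = np.column_stack([np.ones(len(features)), features])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return A @ coef

# Linear model: uses X as-is. "Nonlinear" model: adds a squared feature,
# the simplest way to let output changes vary with where the input sits.
linear_pred = fit_least_squares(X, y)
nonlinear_pred = fit_least_squares(np.column_stack([X, X ** 2]), y)

def r2(pred, target):
    """Fraction of the target's variance explained by the prediction."""
    ss_res = np.sum((target - pred) ** 2)
    ss_tot = np.sum((target - target.mean()) ** 2)
    return 1 - ss_res / ss_tot

print(f"linear R^2:    {r2(linear_pred, y):.2f}")
print(f"nonlinear R^2: {r2(nonlinear_pred, y):.2f}")
```

On this synthetic data the linear fit explains almost none of the variance, while the model with the squared feature explains nearly all of it.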

"Noninvasive techniques are just not accurate enough today. Let's hope, for patients, that in the future we could, from just electrodes placed outside on the skull, read activity from deeper regions of the brain with a good signal quality. But we are far from there," said Ludovic Bellier, postdoctoral fellow and co-author of the study, in a press release.

2,668 electrodes implanted into the brain

The scientists played the iconic band’s song ‘Another Brick in the Wall, Part 1’ to the patients in a hospital suite at Albany Medical Center in New York as they were being prepared for surgery.

The study enrolled a cohort of 29 patients with epilepsy, all of whom volunteered and gave written informed consent before participating. Across the cohort, the patients had a total of 2,668 electrodes surgically implanted in their brains.

They passively listened to the 1979 hit while being prepared for epilepsy surgery, and were instructed to listen attentively to the 190.72-second-long song without focusing on any particular detail.

AI's role in the study

“In addition to stimulus reconstruction, we also adopted an encoding approach to test whether recent speech findings generalize to music perception. Encoding models predict neural activity at one electrode from a representation of the stimulus,” said the study.

The team then used machine-learning models to decode the neural activity and reconstruct the song from the brain recordings. This is the first time a song has been reconstructed from intracranial electroencephalography (iEEG) recordings.
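The reconstruction step the study describes, predicting audio features from multichannel neural recordings, can be sketched as a regularized regression. Everything below is a simplified stand-in: the "electrodes," mixing weights, and regularization strength are invented, and the real study also used nonlinear models, while this sketch is linear ridge regression.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_electrodes = 500, 32

# Hypothetical audio feature over time (e.g., one spectrogram band).
audio = np.sin(np.linspace(0, 20 * np.pi, n_samples))

# Simulated iEEG: each electrode records a noisy weighted copy of the stimulus.
weights = rng.normal(size=n_electrodes)
neural = np.outer(audio, weights) + 0.5 * rng.normal(size=(n_samples, n_electrodes))

# Fit a ridge regression mapping electrodes back to the audio feature
# (stimulus reconstruction) on the first half of the recording.
half = n_samples // 2
X_train, X_test = neural[:half], neural[half:]
y_train, y_test = audio[:half], audio[half:]

lam = 1.0  # regularization strength (illustrative choice)
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_electrodes),
                    X_train.T @ y_train)

# Apply the fitted weights to held-out data and score the reconstruction.
reconstruction = X_test @ w
corr = np.corrcoef(reconstruction, y_test)[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

Because many electrodes carry noisy copies of the same signal, pooling them lets the regression recover the stimulus far more cleanly than any single channel could, which is the core idea behind decoding from electrode grids.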

The scientists believe this could be groundbreaking for people who have trouble communicating. Recordings from electrodes on the brain surface have been previously used to decipher speech, but the scientists’ current explorations could help reproduce the musicality of speech, which would be an upgrade from today’s robot-like reconstructions.

"It's a wonderful result," said Robert Knight, a professor at UC Berkeley and co-author of the study. "One of the things for me about music is it has prosody and emotional content. As this whole field of brain machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who's got ALS or some other disabling neurological or developmental disorder compromising speech output.”

The scientists also hope that someday neural activity can be recorded without invasive brain surgery, using sensitive electrodes attached to the scalp instead.

The study was published in the journal PLOS Biology.

Study abstract:

Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior–posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling on short datasets acquired in single patients, paving the way for adding musical elements to brain–computer interface (BCI) applications.
