Discovery of How the Brain Encodes Speech May Give Voice to the Disabled
Despite his handicap, Stephen Hawking became one of the most powerful voices in modern science.
During the last decades of his life, the renowned astrophysicist was also known for his wheelchair and the computer that rendered his thoughts in a robotic voice.
Hawking may be the most famous example, but he is far from the only person whose speech is restricted by muscular paralysis.
Researchers want to help people with restricted speech communicate again using a new brain interface. The machine would decode the signals the brain sends to the tongue, palate, lips, and larynx and translate them into words.
Creating a machine to interpret 'silent' speech
The innovative brain machine interface (BMI) is the result of research from Northwestern Medicine and the Weinberg College of Arts and Sciences. The team discovered that the brain controls speech in much the same way it controls an arm or a leg.
To track this, the researchers recorded signals from two separate parts of the brain. They found that the brain represents two distinct aspects of language: what we are trying to say (the speech sounds, or phonemes) and the individual movements our body must make to get those words and thoughts out.
That understanding was critical in creating a new type of BMI.
"This can help us build better speech decoders for BMIs, which will move us closer to our goal of helping people that are locked-in speak again," said lead author Marc Slutzky, associate professor of neurology and of physiology at Northwestern University Feinberg School of Medicine and a Northwestern Medicine neurologist.
How the brain transforms words into speech
The mechanics behind your words consist of phonemes, individual speech sounds produced by coordinated movements, or gestures, of the lips, tongue, and other parts of the vocal tract. How the brain plans those gestures had remained a mystery to scientists until now.
"We hypothesized speech motor areas of the brain would have a similar organization to arm motor areas of the brain," Slutzky said. "The precentral cortex would represent movements (gestures) of the lips, tongue, palate and larynx, and the higher level cortical areas would represent the phonemes to a greater extent."
According to the team, that's what they discovered.
"We studied two parts of the brain that help to produce speech," Slutzky said. "The precentral cortex represented gestures to a greater extent than phonemes. The inferior frontal cortex, which is a higher level speech area, represented both phonemes and gestures."
Next steps for unlocking more speech
The Northwestern team recorded brain signals using electrodes placed on the cortical surface. Their subjects were patients undergoing surgery to remove brain tumors, chosen because such patients must remain awake during the procedure. While the patients were on the operating table, the team asked them to read a handful of words from a screen.
The scientists then marked every time the patients produced a gesture or a phoneme and matched those events against the signals recorded from the two cortical areas to determine which gestures and phonemes each area represented.
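The study itself used more sophisticated analyses, but a minimal sketch of this kind of comparison, assuming pre-extracted neural features and hand-labeled event times (the feature shapes, label classes, and synthetic data below are illustrative, not taken from the research), might look like this:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Illustrative stand-ins: 200 labeled speech events, each with a feature
# vector extracted from the cortical recording around the event time
# (e.g., band power per electrode). Real data would come from the
# electrodes placed on the cortical surface.
n_events, n_features = 200, 64
features = rng.normal(size=(n_events, n_features))
gesture_labels = rng.integers(0, 4, size=n_events)   # hypothetical gesture classes
phoneme_labels = rng.integers(0, 8, size=n_events)   # hypothetical phoneme classes

def decoding_accuracy(X, y):
    """How well can this label type be decoded from one area's signals?"""
    clf = LinearDiscriminantAnalysis()
    return cross_val_score(clf, X, y, cv=5).mean()

# Comparing the two scores for a given area is one way to ask whether that
# area represents gestures to a greater extent than phonemes, or vice versa.
print("gesture decoding accuracy:", decoding_accuracy(features, gesture_labels))
print("phoneme decoding accuracy:", decoding_accuracy(features, phoneme_labels))
```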
The team's next step is to develop an algorithm for the brain machine interface that not only decodes gestures but also uses those decoded gestures to form words and, ultimately, speech.
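The article does not describe how such an algorithm would work; purely as a toy sketch of the two-stage idea (neural signals to gestures, then gestures to words), with invented gesture names, a placeholder decoding rule, and a made-up lookup table standing in for trained models, it could be structured along these lines:

```python
class GestureDecoder:
    """Stage 1: map a window of neural features to an articulatory gesture."""
    def decode(self, neural_window):
        # Placeholder rule standing in for a trained classifier.
        return "lip_closure" if sum(neural_window) > 0 else "tongue_raise"

class WordAssembler:
    """Stage 2: map a sequence of decoded gestures to a candidate word."""
    LEXICON = {
        ("lip_closure", "tongue_raise"): "ban",   # invented mapping
        ("tongue_raise", "lip_closure"): "nab",
    }
    def assemble(self, gestures):
        return self.LEXICON.get(tuple(gestures), "<unknown>")

decoder, assembler = GestureDecoder(), WordAssembler()
windows = [[0.4, -0.1, 0.3], [-0.5, 0.2, -0.2]]  # fake neural feature windows
gestures = [decoder.decode(w) for w in windows]
print(gestures, "->", assembler.assemble(gestures))
```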