Breaking the silence: Cornell researchers build sonar glasses for communication without words

Communication without boundaries. "We're moving sonar onto the body."
Abdul-Rahman Oladimeji Bello
Ruidong Zhang wearing EchoSpeech glasses. Credit: Cornell University

Cornell University researchers have developed a new technology that allows silent communication through sonar-equipped glasses. The glasses use tiny microphones and speakers to read words silently mouthed by the wearer, letting the user perform various tasks without any physical input. The technology was developed by Ruidong Zhang, a Ph.D. student at Cornell, and builds on an earlier project that used a wireless earbud, as well as previous models that relied on cameras.

Highly Accurate Design

The glasses are designed to be unobtrusive, with no need for the user to face a camera or wear an earbud. Instead, they use sonar to sense mouth movements and a deep learning algorithm to analyze the resulting echo profiles in real time, recognizing silently mouthed words with around 95 percent accuracy.
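EchoSpeech's code is not public, but the pipeline described here follows a recognizable pattern: emit a near-ultrasonic signal, cross-correlate the received audio with the transmitted waveform to get an echo profile (a map of reflections at different distances), and feed a window of profiles to a small neural network. The following is a minimal sketch of that idea; the chirp frequencies, frame sizes, class count, and network architecture are all illustrative assumptions, not the researchers' actual design.

```python
# Hypothetical sketch of echo-profile-based silent-speech recognition.
# All parameters (chirp design, frame sizes, CNN layout) are assumptions.
import numpy as np
import torch
import torch.nn as nn

FS = 48_000          # assumed speaker/microphone sample rate (Hz)
CHIRP_LEN = 480      # 10 ms transmitted chirp
FRAME_LEN = 960      # 20 ms of received audio per echo frame

def make_chirp(f0=18_000.0, f1=21_000.0, n=CHIRP_LEN, fs=FS):
    """Near-ultrasonic linear chirp, inaudible to most adults."""
    t = np.arange(n) / fs
    k = (f1 - f0) / (n / fs)          # sweep rate in Hz per second
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2)).astype(np.float32)

def echo_profile(received, chirp):
    """Cross-correlate a received frame with the transmitted chirp.
    Peaks correspond to reflections at different path lengths, so the
    correlation vector acts as a 1-D snapshot of the lower face."""
    return np.correlate(received, chirp, mode="valid")

class EchoNet(nn.Module):
    """Tiny CNN that classifies a stack of consecutive echo profiles
    (time x delay) into one of N silently mouthed commands."""
    def __init__(self, n_classes=31):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(32 * 8 * 8, n_classes),
        )

    def forward(self, x):            # x: (batch, 1, frames, delays)
        return self.net(x)

# Build a profile "image" from 50 consecutive frames of (simulated) audio.
chirp = make_chirp()
frames = np.random.randn(50, FRAME_LEN).astype(np.float32)  # stand-in mic data
profiles = np.stack([echo_profile(f, chirp) for f in frames])
x = torch.from_numpy(profiles).unsqueeze(0).unsqueeze(0)    # (1, 1, 50, 481)
logits = EchoNet()(x)
print(logits.argmax(dim=1))          # predicted command index
```

Stacking consecutive profiles turns the time dimension into one axis of a 2-D input, which is why an image-style CNN is a natural fit: mouth movements show up as patterns in how the reflection peaks shift over time.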

One of the most exciting prospects for the technology is that individuals with speech disabilities could use it to silently feed dialogue into a voice synthesizer, which would then speak the words aloud. The glasses could also be used to control music playback in a quiet library or to dictate a message at a loud concert, where standard voice input would fail.

The technology is designed to be small, low-power, and privacy-sensitive: no data leaves the user's phone, which avoids privacy concerns. The glasses' form factor also removes the need to face a camera or put something in your ear, making the system more practical and feasible than other available silent-speech recognition technologies.

According to Cheng Zhang, Cornell assistant professor of information science, "Most technology in silent-speech recognition is limited to a select set of predetermined commands and requires the user to face or wear a camera, which is neither practical nor feasible. We're moving sonar onto the body."

The researchers say the system needs only a few minutes of training data to learn a user's speech patterns. Once trained, it sends and receives sound waves across the user's face, sensing mouth movements while a deep learning algorithm analyzes the echo profiles. A sketch of what such a short calibration step could look like follows below.
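A few minutes of data is far too little to train a model from scratch, so a plausible reading is that a pretrained model is fine-tuned on a handful of labeled examples from the new wearer. This is a hypothetical calibration loop in that spirit; the dataset format, optimizer, and training regime are assumptions, not the published method.

```python
# Hypothetical few-minute calibration: fine-tune a pretrained recognizer
# on a small set of labeled echo profiles recorded from the new wearer.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def calibrate(model, profiles, labels, epochs=10, lr=1e-3):
    """profiles: (N, 1, frames, delays) float tensor from the new user;
    labels: (N,) long tensor of mouthed-command indices."""
    loader = DataLoader(TensorDataset(profiles, labels),
                        batch_size=8, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Usage, with the illustrative EchoNet from the earlier sketch:
#   calibrate(EchoNet(), user_profiles, user_labels)
```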

The current version of the glasses offers around 10 hours of battery life for acoustic sensing and offloads data processing wirelessly to the user's smartphone, allowing the accessory to remain small and unobtrusive.
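Splitting sensing from inference is what keeps the glasses light: the wearable only needs a microphone, a speaker, and a radio, while the phone handles the neural network. The article doesn't describe the actual wireless protocol, so this is only an illustrative offload pattern using plain sockets.

```python
# Illustrative sensing/inference split (not EchoSpeech's actual protocol):
# the glasses stream raw float32 echo frames and the phone runs the model.
import socket
import numpy as np

FRAME_LEN = 960  # samples per frame, matching the earlier sketch

def stream_frames(frames, host="192.168.1.50", port=5005):
    """Glasses side: send float32 echo frames to the paired phone."""
    with socket.create_connection((host, port)) as sock:
        for frame in frames:
            sock.sendall(frame.astype(np.float32).tobytes())

def receive_frames(port=5005):
    """Phone side: accept one connection and yield frames for inference."""
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            buf = b""
            while True:
                chunk = conn.recv(4096)
                if not chunk:
                    break
                buf += chunk
                while len(buf) >= FRAME_LEN * 4:  # 4 bytes per float32
                    frame, buf = buf[:FRAME_LEN * 4], buf[FRAME_LEN * 4:]
                    yield np.frombuffer(frame, dtype=np.float32)
```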

The team at Cornell's Smart Computer Interfaces for Future Interactions (SciFi) Lab is exploring commercialization of the technology through a Cornell funding program. They're also looking into smart-glasses applications that track facial, eye, and upper body movements.

"We think glass will be an important personal computing platform to understand human activities in everyday settings," said Cheng Zhang.

Overall, the sonar glasses developed by the Cornell University researchers represent a significant breakthrough in silent-speech recognition technology. With the ability to recognize a wide range of words and phrases, the glasses could revolutionize how we interact with technology and each other, whether by controlling music playback, dictating messages, or giving people with speech disabilities a new way to communicate.
