How your keyboard sounds can expose your data to AI hackers

Researchers have developed a system that can guess passwords by listening to the sound of typing during online meetings and video calls.
Rizwan Choudhury
Typing on a laptop keyboard.

Credits: isil terzioglu/iStock 

A new study has warned that hackers can use artificial intelligence (AI) to guess your passwords by listening to the sound of your typing during a Zoom call. The researchers say that video conferencing tools like Zoom have increased the risk of sound-based cyberattacks, as most devices have built-in microphones.

The study, published in the IEEE European Symposium on Security and Privacy Workshops, describes a machine-learning system that can identify which keys are being pressed on a laptop keyboard with more than 90 percent accuracy from sound recordings alone.

Tested using a MacBook Pro on Zoom call

The researchers from the University of Surrey, Durham University, and Royal Holloway, University of London, pressed each of 36 keys on a MacBook Pro, including all of the letters and numbers, 25 times in a row, using different fingers and with varying pressure. The sounds were recorded both over a Zoom call and on a smartphone near the keyboard.

The team then trained a machine learning system to recognize features of the acoustic signal associated with each key. When tested on held-out recordings, the system assigned the correct key to a sound 95 percent of the time when the recording was made over a nearby phone and 93 percent of the time when it was made over a Zoom call.
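The pipeline described above, recording repeated presses of each key, extracting spectral features, and classifying new sounds, can be illustrated with a deliberately simplified sketch. The synthetic "keystrokes," the log-spectrum features, and the nearest-centroid classifier below are illustrative stand-ins; the actual study used real microphone recordings, mel-spectrograms, and a deep learning model.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 16_000           # sample rate in Hz
KEYS = list("abcde")  # tiny stand-in for the 36 keys in the study

def record_keystroke(key, n=1024):
    """Simulate one keystroke: each key gets a characteristic decaying
    resonance plus noise (a stand-in for a real microphone recording)."""
    freq = 400 + 150 * KEYS.index(key)  # hypothetical per-key resonance
    t = np.arange(n) / SR
    tone = np.sin(2 * np.pi * freq * t) * np.exp(-30 * t)
    return tone + 0.05 * rng.standard_normal(n)

def features(signal):
    """Log-magnitude spectrum: a crude analogue of the mel-spectrogram
    features used in the paper."""
    return np.log1p(np.abs(np.fft.rfft(signal)))

# Mirror the data collection: 25 presses per key, averaged into a template.
templates = {
    k: np.mean([features(record_keystroke(k)) for _ in range(25)], axis=0)
    for k in KEYS
}

def classify(signal):
    """Nearest-centroid classifier: pick the key whose average spectral
    template is closest to this recording's features."""
    f = features(signal)
    return min(KEYS, key=lambda k: np.linalg.norm(f - templates[k]))

# Evaluate on fresh recordings the classifier has never seen.
trials = [(k, classify(record_keystroke(k))) for k in KEYS for _ in range(20)]
accuracy = sum(truth == guess for truth, guess in trials) / len(trials)
```

Because each simulated key has a distinct resonance, this toy classifier scores very highly; the hard part of the real attack is that genuine keystrokes differ far more subtly, which is why the researchers needed a deep learning model to reach 93 to 95 percent.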

The study is not the first to show that keystrokes can be identified from sound, but the team says theirs uses the most advanced methods and achieves the highest accuracy so far.

Dr. Ehsan Toreini, who co-authored the study at the University of Surrey, said such attacks, and the models behind them, will only become more accurate over time. He added that as more households adopt smart devices with built-in microphones, there is a need for public discussion on how to regulate AI. Joshua Harrison, who led the study, noted one limitation of the method: detecting the release of a shift key is very difficult.

Highlighting public awareness and concerns

The researchers say their work is a proof-of-concept study and has not been used to crack passwords or in real-world settings like coffee shops. However, they say their work highlights the need for public awareness and debate on the governance of AI, as such acoustic “side-channel attacks” could threaten any keyboard.

The researchers suggest ways to reduce the risk of such attacks, such as using biometric authentication or two-step verification. Alternatively, they say, using the shift key to mix upper- and lower-case letters, numbers, and symbols makes passwords harder to recover, since shift-key releases are difficult to detect acoustically.


Study abstract:

With recent developments in deep learning, the ubiquity of microphones and the rise in online services via personal devices, acoustic side channel attacks present a greater threat to keyboards than ever. This paper presents a practical implementation of a state-of-the-art deep learning model in order to classify laptop keystrokes, using a smartphone integrated microphone. When trained on keystrokes recorded by a nearby phone, the classifier achieved an accuracy of 95%, the highest accuracy seen without the use of a language model. When trained on keystrokes recorded using the video-conferencing software Zoom, an accuracy of 93% was achieved, a new best for the medium. Our results prove the practicality of these side channel attacks via off-the-shelf equipment and algorithms. We discuss a series of mitigation methods to protect users against these series of attacks.