Approved Apps Turn Google Home and Alexa into 'Smart Spies'

Hackers were able to spy on users and phish passwords.
Chris Young

Smart home assistants are becoming so ubiquitous that some companies have even started creating hardware you can attach to the devices to stop them from listening to you.

Now, a new threat gives us more proof that these fears aren't in the least bit unfounded.

Researchers over at Security Research Lab have shown they were able to create malicious third-party apps and have them hosted by Amazon and Google. These apps would allow a hacker to spy on smart assistant users and steal their personal information.


Testing smart assistant safety

The research hackers at Germany's Security Research Lab recently developed eight apps, four for Alexa and four for Google Home, all of which passed the companies' security tests.

On the surface, these apps were simple horoscope applications, except for one, which posed as a random number generator. In reality, they were "smart spies," that had the capacity to allow the researchers to spy on users and phish for their passwords.

"It was always clear that those voice assistants have privacy implications — with Google and Amazon receiving your speech, and this possibly being triggered by accident sometimes," Fabian Bräunlein, senior security consultant at SRLabs, told Ars Technica.

"We now show that, not only the manufacturers, but... also hackers can abuse those voice assistants to intrude on someone's privacy."

Fake errors and updates

The SRL Labs researchers used different methods, via the apps, to show that it is possible to develop third-party apps that allow a hacker to eavesdrop on conversations.

The apps could be triggered to send fake error messages, giving the impression they were no longer running. In reality, they were silently running in the background allowing the app developers to listen to mic recordings.

Most Popular

The researchers released two videos, showing how these apps work on each separate smart assistant. 

For phishing attacks, the apps used a fake voice that greatly resembles those of the smart assistants. The voice claims a device update is needed and asks the user for their password.

SRL Labs posted a document detailing the exact flaws they manipulated. 

The researchers privately showed the results of their work to Google and Amazon before releasing them to the public.

Both companies have released statements saying they are improving their safety review process.

message circleSHOW COMMENT (1)chevron