7 Ways Data and AI Can Be Used to Trick and Deceive the Public

Data increasingly dictates how we live our lives.
Chris Young

"Data is the oil, some say the gold, of the 21st century," Siemens CEO, Joe Kaeser, said in 2018.

A recent example of the power of data: as large parts of the world's population are confined indoors during the COVID-19 pandemic, the founder of video meeting company Zoom has seen his net worth rise by $2 billion while airlines go bust.

Here are 7 surprising ways that data has been used to deceive individuals and the public.

1. Traffic hackers bringing cities to a halt

Hackers have been shown to be able to break into traffic control systems — with worrying ease — and manipulate their data to disrupt traffic in various ways.

In 2014, Cesar Cerrudo, an Argentinian security researcher with IoActive, examined the vehicle traffic control systems installed in major U.S. cities and presented his findings at the Infiltrate conference in Florida. He showed that the systems could be manipulated to bring traffic to a standstill or to force cars to change their routes.


A 2015 demonstration by hackers Charlie Miller and Chris Valasek, meanwhile, showed how they could remotely take control of a Jeep Cherokee driving on the highway by breaking into its connected entertainment system.

While these demonstrations showed how data could be manipulated by hacking into large systems, there are also real-life examples of data being used to fool traffic services. Just last month, it was revealed that a man used a wagon filled with 99 smartphones — all running Google Maps navigation — to trick the app into falsely reporting traffic jams on streets that were actually empty.

2. Deepfake videos manipulating the masses

Deepfake and otherwise altered videos are becoming so advanced that they are increasingly hard to spot. Many believe AI deepfake tools, which allow people to superimpose the face of a politician or actor onto a video and convincingly replicate their voice, could be a real threat to democracy.

In May 2019, Donald Trump posted a video, which had gone viral, of Nancy Pelosi appearing to drunkenly slur her way through a speech. The video was quickly debunked: someone had slowed down the original footage while raising the audio's pitch so that the slower speech would still sound natural.

The video was viewed millions of times and Trump, notably, didn't remove the video from his social media after it was debunked.
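To illustrate how little effort this kind of alteration takes, here is a minimal sketch in Python using the open-source librosa audio library. The file names, stretch factor, and pitch step are illustrative assumptions, not details recovered from the actual clip.

```python
# Minimal sketch of a Pelosi-style audio alteration: slow the speech down,
# then raise its pitch so the slowdown is less obvious. Illustrative only;
# the input/output file names and exact values here are placeholders.
import librosa
import soundfile as sf

# Load the original speech audio (sr=None keeps the native sample rate).
y, sr = librosa.load("speech_original.wav", sr=None)

# Stretch the audio to 75% speed without changing its pitch.
slowed = librosa.effects.time_stretch(y, rate=0.75)

# Raise the pitch by two semitones so the slowed voice doesn't sound
# unnaturally deep, which is what makes the slurring seem plausible.
altered = librosa.effects.pitch_shift(slowed, sr=sr, n_steps=2)

sf.write("speech_altered.wav", altered, sr)
```

That a handful of lines of free software can produce a clip convincing enough to be viewed millions of times is precisely why detection efforts matter.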

Last year, a deepfaked voice of a CEO was even used to steal $250,000 from a company. To help counter the deepfake problem, Google recently released a large dataset of deepfake videos alongside the original clips.

[Image: Google's deepfake detection dataset. Source: FaceForensics/GitHub]

As Google pointed out in a blog post at the time, the videos were "created to directly support deepfake detection efforts."
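To give a rough sense of what such detection efforts look like in code, below is a minimal sketch of a frame-level real-versus-fake classifier in Python using PyTorch. The directory layout, model choice, and training loop are assumptions for illustration, not part of Google's actual release.

```python
# Minimal sketch: fine-tune a standard image classifier to label video
# frames as "real" or "fake". Assumes frames were already extracted into
# frames/real and frames/fake subfolders -- a big simplification of real
# detection pipelines, which also exploit temporal and audio artifacts.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder infers the two labels from the subfolder names.
dataset = datasets.ImageFolder("frames", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained ResNet and swap in a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass over the extracted frames
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Datasets like Google's exist to feed exactly this kind of training: the more labeled real/fake pairs a classifier sees, the better it gets at spotting the subtle artifacts deepfakes leave behind.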

3. Cambridge Analytica and the harvesting of personal data

In 2018, it was revealed that Cambridge Analytica, a company that worked on presidential social media campaigns, had harvested personal data from millions of Facebook profiles without the users' consent.

This data was used to send people targeted ads based on psychological profiles built by analyzing the content of their Facebook pages.

The Cambridge Analytica scandal has been described as a turning point in the public's perception of just how powerful data is, and how it can be used to manipulate entire populations. It also led to a steep fall in Facebook's stock price and public standing, and many other big tech companies were subsequently scrutinized for the way they use data.

4. People catching cheating partners

In some cases, data can be used to catch people in an act of deception. In one example, NFL reporter Jane Slater caught her cheating partner via data from his Fitbit wearable.

The now ex-partner had given her the Fitbit as a Christmas gift so the couple could track each other's activity levels and motivate each other to exercise more.

In a tweet, Slater described how she didn't hate it until "he was unaccounted for at 4 am and his physical activity levels were spiking on the app."

Of course, while this clearly wasn't the case in Jane Slater's story, tracking data can also be used maliciously by stalkers.

5. Smart homes hacked and controlled

There's a real danger that the "smarter" an Internet of Things (IoT) home, device, or vehicle gets, the more vulnerable it is to hackers.

This is especially the case for our homes, where a large number of interconnected smart devices make up the "smart home." While these devices provide many benefits for homeowners, they also give hackers many digital entryways into the home.

There are several things hackers could do when it comes to smart homes: smart lock systems can be hacked, allowing hackers to lock people into their own homes; security systems can be disabled, allowing for a physical home invasion; and home appliances can be hacked and manipulated.

Just last year, a Milwaukee couple reported that hackers had broken into their Google Nest smart home system — the hackers raised the thermostat and blasted vulgar music through the wireless speakers.

6. Bringing relatives back to "life"

Griefbots are already a thing in 2020. The ethical implications surrounding them were memorably tackled in the Black Mirror episode 'Be Right Back,' in which a woman chats with an AI recreation of her dead partner before having a lifelike android version of him sent to her in the post.

As data scientist Muhammad Ahmad told The Daily Beast in 2018, he spent years collecting the data his father had left behind, such as audio and video recordings, text messages, and transcripts of letters, so that his daughter, who never met her grandfather, could chat with a digital avatar of the deceased.
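At its simplest, such a bot can work by retrieval: reply with whichever archived message best matches what the user just said. Below is a minimal sketch of that idea in Python using scikit-learn; the sample messages are invented placeholders, and this is not a description of Ahmad's actual system, which was presumably far more sophisticated.

```python
# Minimal sketch of a retrieval-style "griefbot": reply with the archived
# message most similar to the user's input. A real system would use far
# more data and likely a generative model; this is only illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder corpus standing in for years of saved texts and letters.
archived_messages = [
    "Don't worry so much, things have a way of working out.",
    "I'm proud of you, you know that?",
    "Call me when you land, okay?",
]

vectorizer = TfidfVectorizer()
message_vectors = vectorizer.fit_transform(archived_messages)

def reply(user_input: str) -> str:
    """Return the archived message closest to the user's input."""
    query = vectorizer.transform([user_input])
    scores = cosine_similarity(query, message_vectors)[0]
    return archived_messages[scores.argmax()]

print(reply("I'm nervous about my new job"))
```

Even this crude approach shows why griefbots feel uncanny: the replies really are the deceased's own words, just resurfaced by an algorithm.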

Is this manipulation, or simply a new way to help us grieve? These bots are likely not going away any time soon, so it's a question we will continue to grapple with in the coming years.

7. Coronavirus and "sextortion" scams

Sadly, scams have been rife since much of the world's population was confined to their homes by the coronavirus outbreak. One online scam that hackers have used for years, the "sextortion" scam, typically sees a victim receive a message from an anonymous person claiming to have nude pictures of the victim.

If the victim doesn't want their entire family and social circle to see the pictures, they need to send over money, usually in the form of bitcoin. Typically, the scammer will have obtained some of the victim's personal data and reveals details from it to frighten the victim into paying.

Reports are emerging that this scam has been adapted to target people who are vulnerable and scared of COVID-19. As The Guardian reports, one such message said: "You need tο pay me $4,000. You’ll make the payment via bitcoin … If I do not get the payment: Ι will infect every member οf your family with the coronavirus."

Data is increasingly dictating how we live our lives. As data manipulation by individuals, hackers, and organizations becomes more prevalent, it is important that people are protected, and that they also take steps to keep themselves safe.
