Many things have been said of Tom Hanks. He's an actor, and a good one. His reputation as an iconic everyman precedes him. But recently he wasn't himself at all: the image people saw of Hanks at the Black Hat computer security conference was a deepfake produced by machine-learning algorithms, not by a film studio, according to a blog post on FireEye's website.
Deepfakes becoming cheaper, easier to pull off
Philip Tully, a data scientist at the security company FireEye, generated "hoax Hankses" to show how simple it is to use open-source software from artificial intelligence labs to launch misinformation campaigns. "People with not a lot of experience can take these machine-learning models and do pretty powerful things with them," Tully told Wired.

When displayed at full resolution (on the right of the above image), the fakes show minor flaws, like unnatural skin textures and neck folds. But they're still relatively accurate reproductions of familiar details of Hanks' face, such as his green-gray eyes and the way his brow furrows. Reduced to the size of a thumbnail on a social network, AI-crafted images like these might pass for the real deal.
To develop the deepfake Hanks images, Tully used only a few hundred photos of the actor, spending less than $100 to fine-tune open-source face-generation software to match Hanks' appearance.
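FireEye hasn't published its training code, and real deepfake work fine-tunes a large generative network on photos of the target rather than training from scratch. As a rough, hypothetical illustration of what "fine-tuning" means, the toy sketch below continues gradient descent from "pretrained" weights on a small target dataset, using simple linear regression so the example stays self-contained (the dataset, weights, and learning rate here are all invented for illustration):

```python
import numpy as np

# Toy sketch of fine-tuning: start from "pretrained" weights and
# continue gradient descent on a small target dataset, instead of
# training from a random initialization. A real deepfake pipeline
# would do this with a large generative face model, not a linear one.

rng = np.random.default_rng(0)

def loss_and_grad(w, X, y):
    """Mean-squared-error loss and its gradient for a linear model."""
    residual = X @ w - y
    loss = float(np.mean(residual ** 2))
    grad = 2 * X.T @ residual / len(y)
    return loss, grad

# "Pretrained" weights: imagine these came from a large, generic dataset.
w_pretrained = np.array([1.0, -0.5, 0.25])

# Small target dataset (analogous to a few hundred photos of one face).
X = rng.normal(size=(200, 3))
w_true = np.array([1.2, -0.3, 0.4])  # the target's "true" parameters
y = X @ w_true + rng.normal(scale=0.01, size=200)

# Fine-tune: a modest number of gradient steps from the pretrained weights.
w = w_pretrained.copy()
loss_before, _ = loss_and_grad(w, X, y)
for _ in range(100):
    _, grad = loss_and_grad(w, X, y)
    w -= 0.1 * grad
loss_after, _ = loss_and_grad(w, X, y)

print(f"loss before fine-tuning: {loss_before:.4f}")
print(f"loss after fine-tuning:  {loss_after:.4f}")
```

The point of starting from pretrained weights is that the model already encodes most of what it needs, so only a small dataset and a little compute are required to adapt it to a specific target, which is exactly why Tully's experiment was so cheap.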
FireEye project could raise concerns about deepfake campaigns
By demonstrating how cheap and easy it is to generate passable images, the FireEye project may heighten concerns that online misinformation campaigns could be amplified by AI-generated media that looks and sounds like real public figures.
These techniques and the images they produce are typically called deepfakes, a term taken from the username of a Redditor who, in late 2017, posted pornographic clips altered to include the faces of celebrities.
For now, we can rest easy: most internet-based deepfakes are of low enough quality to be recognized as fakes, and are mainly made for entertainment (or pornographic) purposes. The most widely documented malicious use of deepfakes involves the harassment of women. While this is serious, it may pale in comparison to hypothetical instances of malicious use, say, a deepfake of a president ordering the launch of a nuclear weapon.