OpenAI Just Released an Even Scarier Fake News-Writing Algorithm

The company still has a stronger version of the AI under wraps.

OpenAI, the AI company that Elon Musk co-founded and later left, has just released a more powerful version of its AI text-writing software.

The company still won't release its full software, which could be used to write fake news and messages en masse, due to fears it might be misused.

What does OpenAI do?

OpenAI says its text-writing system is so advanced it can write news stories and even fiction that passes as human.

A user can feed the system text, anything from a few sentences to several pages, and the system will then continue that same text in an uncannily well-written, contextually relevant, human style.

However, after announcing its original system, GPT-2, in February, the company said the full software was too dangerous to release to the public, so it made only a weaker version available.

Now, the company has announced it has released a version of GPT-2 that is six times more powerful.

You can actually try the latest public OpenAI system at TalkToTransformer.com. The results can be eerily realistic, though there are obvious flaws in the writing.

OpenAI is still being careful

According to OpenAI’s statement, there’s still an even more powerful version of GPT-2 that the company hasn't yet revealed.

The company says that it plans to release the more powerful model within a few months, but that it may not if it determines that people are using the new, stronger GPT-2 maliciously.

At the time of the original announcement of GPT-2's release in February, Jack Clark, OpenAI's head of policy, told The Guardian there are "many people who are better than us at thinking what [the AI] can do maliciously."

It could be used, for example, to generate endless fake positive or negative reviews, as if written by a real person.

A cure for fake news?

While OpenAI brings us closer to AI world domination, a group of Harvard and MIT researchers has been developing a method to use AI to fight AI.

The researchers developed a system, dubbed GLTR, that uses an algorithm to estimate the likelihood that a passage was written by AI or not.
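GLTR's core insight is that machine-generated text tends to pick words a language model itself ranks as highly likely, while human writing is more surprising. The toy sketch below illustrates that ranking idea with a tiny bigram model; it is a hypothetical stand-in for demonstration, not GLTR's actual code (which consults GPT-2's own predictions):

```python
from collections import Counter, defaultdict

# Toy illustration of GLTR's core idea: score each token by how highly
# a language model ranks it among predicted next tokens. A real detector
# would use GPT-2's predictions; this bigram counter is a stand-in.

def train_bigram(corpus_tokens):
    """Count next-token frequencies for each preceding token."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def token_ranks(model, tokens):
    """For each token after the first, its rank (1 = most likely) under
    the model given the previous token; None if never seen in context."""
    ranks = []
    for prev, tok in zip(tokens, tokens[1:]):
        ordered = [w for w, _ in model[prev].most_common()]
        ranks.append(ordered.index(tok) + 1 if tok in ordered else None)
    return ranks

corpus = "the cat sat on the mat and the cat slept on the mat".split()
model = train_bigram(corpus)

# Every token here is the model's top-ranked continuation, the kind of
# "too predictable" signature a GLTR-style tool would flag.
print(token_ranks(model, "the cat sat".split()))  # → [1, 1]
```

A run of low ranks (lots of 1s and 2s) suggests machine-like text; frequent high or unseen ranks look more human.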

It will be interesting to see if GLTR ever comes up against GPT-2's strongest version - if it's ever released to the public, that is. The AI wars may be upon us.