
Big Whoop: DARPA Makes Oh-So-Clever Online Sarcasm Detector

A new algorithm showed 'nearly perfect sarcasm detection' on a large dataset of Tweets.

Throwing that proverbial shade repercussion-free might become a little harder online thanks to a new algorithm — trained on large datasets from sources including Twitter, Reddit, and The Onion — that detects patterns of sarcasm in text.

DARPA's Information Innovation Office (I2O) collaborated with researchers from the University of Central Florida to develop the deep learning AI algorithm, which detects written sarcasm with a surprising degree of accuracy. The team behind the algorithm published their findings in the journal Entropy.

Overcoming the online sarcasm barrier

Much has been said over the years about the limitations of online communication. In his first presentation of Neuralink's early brain interface technology, Elon Musk pointed out that "output speed is especially slow because most people [are] typing with thumbs" on their smartphones.

Speed and bandwidth have also been focuses of innovation, with technologies like 5G enabling faster connectivity and more stable online communication.

Until now, AI's ability to understand sarcasm hasn't been seen as a major area for innovation in online communication — if anything, sarcastic memes are a rite of passage that allow people online to connect with like-minded typers on a human level.

That's where DARPA's new machine learning algorithm comes in — the AI was trained on years' worth of sarcastic online communication, much like your average millennial.

Boosting AI's sentiment analysis

At the root of DARPA's new AI is the concept of sentiment analysis, which the organization describes as a growing area of research for commercial and defense communities because it can automate the analysis of everything from customer feedback to communications between potential bad actors, at mass scale.


"Sarcasm has been a major hurdle to increasing the accuracy of sentiment analysis, especially on social media, since sarcasm relies heavily on vocal tones, facial expressions, and gestures that cannot be represented in text," Brian Kettler, a program manager in DARPA’s Information Innovation Office (I2O), explained in a press statement.

"Recognizing sarcasm in textual online communication is no easy task as none of these cues are readily available," Kettler continued.

In their study, the researchers demonstrated an "interpretable deep learning model" that analyzes words in text datasets ranging from Tweets to private online messages.

The University of Central Florida team's method differs from that of previous studies — such as one that trained an AI on 5,000 examples of real and sarcastic news headlines — because it is "language agnostic."


This means that the team essentially trained their algorithm on a huge set of sarcastic messages and allowed it to find patterns, instead of training it to look for specific words or parts of speech from the outset.

In its press statement, DARPA explained that using "recurrent neural networks and attention mechanisms, the model tracks dependencies between the cue-words and then generates a classification score, indicating whether or not sarcasm is present."
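DARPA's description gives the general recipe — a recurrent encoder, attention over candidate cue-words, and a final classification score — but not the exact architecture. The toy sketch below (pure Python, with random untrained weights; it is not the team's actual model) shows how those three pieces fit together: the tokens that receive the largest attention weights play the role of cue-words.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ToySarcasmClassifier:
    """Illustrative recurrent-encoder-plus-attention classifier.
    Weights are random stand-ins for trained parameters."""

    def __init__(self, vocab, dim=8):
        self.dim = dim
        self.emb = {w: [random.uniform(-1, 1) for _ in range(dim)] for w in vocab}
        self.w_h = [random.uniform(-1, 1) for _ in range(dim)]    # recurrence weights
        self.w_att = [random.uniform(-1, 1) for _ in range(dim)]  # attention scorer
        self.w_out = [random.uniform(-1, 1) for _ in range(dim)]  # output layer

    def forward(self, tokens):
        # 1) Recurrent encoding: each hidden state mixes the current
        #    token embedding with the previous hidden state.
        h = [0.0] * self.dim
        states = []
        for tok in tokens:
            e = self.emb.get(tok, [0.0] * self.dim)
            h = [math.tanh(e[i] + self.w_h[i] * h[i]) for i in range(self.dim)]
            states.append(h)
        # 2) Attention: score each hidden state, softmax into weights.
        #    Heavily weighted tokens act as candidate "cue-words."
        scores = [sum(s[i] * self.w_att[i] for i in range(self.dim)) for s in states]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        attn = [x / total for x in exps]
        # 3) Attention-weighted context vector -> sigmoid classification score.
        ctx = [sum(attn[t] * states[t][i] for t in range(len(states)))
               for i in range(self.dim)]
        score = sigmoid(sum(ctx[i] * self.w_out[i] for i in range(self.dim)))
        return score, dict(zip(tokens, attn))

tokens = "oh great another monday".split()
model = ToySarcasmClassifier(vocab=tokens)
score, attention = model.forward(tokens)
print(round(score, 3))                     # classification score in (0, 1)
print(max(attention, key=attention.get))   # token the model attends to most
```

In the published model the equivalents of these weights are learned from labeled data, so the attention layer ends up concentrating on genuinely sarcastic cue-words rather than arbitrary tokens — which is also what makes the model "interpretable."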

'State-of-the-art' sarcasm detection

According to DARPA, the new algorithm achieved "state-of-the-art results on multiple datasets from social networking platforms and online media." The model achieved a "near-perfect sarcasm detection" score on a major Twitter benchmark dataset as well as "state-of-the-art results" on four other large datasets, the team explained.

Given that this AI model was developed in part by DARPA with defense capabilities in mind, we imagine it may one day be used to filter out bot messages aimed at spreading negative content, or to separate shitposts from more serious messages so AI can better analyze public reactions.


Or, as DARPA puts it, the model could allow for a more "quantitative understanding of adversaries' use of the global information environment."

So, while the model might help government organizations analyze online communication at scale and gauge public perceptions of initiatives and regulations, it's doubtful we'll be getting sarcasm alerts on our smartphones any time soon.
