AI detectors falling short in the battle against cheating

Despite their potential, AI detectors often fall short of accurately identifying and mitigating cheating.
Abdul-Rahman Oladimeji Bello

In the age of advanced artificial intelligence, the fight against cheating, plagiarism and misinformation has taken a curious turn. 

As developers and companies race to create AI detectors capable of identifying content written by other AIs, a new study from Stanford scholars reveals a disheartening truth: these detectors are far from reliable. Students would probably love to hear this. 

Following the highly publicized launch of ChatGPT, several developers and companies introduced their own AI detectors. These algorithms were touted as tools to aid educators, journalists, and others in identifying instances of cheating, plagiarism, and the spread of misinformation. 

However, the Stanford study, as reported by TechXplore, reveals a significant flaw: the detectors are unreliable, especially when the human authors are non-native English speakers.

The statistics paint a grim picture. While the detectors were nearly perfect at evaluating essays written by US-born eighth-graders, they mistakenly categorized more than half (61.22%) of the Test of English as a Foreign Language (TOEFL) essays written by non-native English speakers as AI-generated.

The situation worsens on closer examination of the data. All seven AI detectors unanimously identified 18 of the 91 TOEFL student essays (19%) as AI-generated, and an astonishing 89 of the 91 essays (97%) were flagged by at least one of the detectors.

Why the detectors falter in their accuracy 

Professor James Zou, a biomedical data science expert at Stanford University and the senior author of the study, explains the reason behind the detectors' unreliability. "Detectors typically evaluate content based on 'perplexity,' a metric that correlates with the sophistication of the writing," Zou states. 

"Naturally, non-native English speakers tend to trail behind their US-born counterparts in terms of linguistic complexity, resulting in lower perplexity scores."

Zou and his co-authors emphasize that non-native speakers tend to score lower on the writing measures that drive perplexity, such as lexical richness, lexical diversity, syntactic complexity, and grammatical complexity.
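To make the perplexity idea concrete, here is a minimal sketch of how a perplexity-based scorer might work, assuming the Hugging Face transformers library and the small GPT-2 model; the study does not disclose which models the seven detectors actually use. Lower scores mean the text looks more predictable to the language model, which is exactly why simpler, more conventional prose gets flagged.

```python
# A minimal sketch of perplexity scoring, as a perplexity-based detector
# might compute it. Assumptions: Hugging Face `transformers` and GPT-2;
# the study does not name the models behind the seven detectors.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return exp(mean negative log-likelihood) of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean cross-entropy.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Simpler, more predictable wording tends to score lower -- the pattern
# that leads detectors to mislabel non-native writing as machine-made.
print(perplexity("The cat sat on the mat."))
print(perplexity("The feline reposed upon the woven floor covering."))
```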

These concerning numbers raise important questions about the objectivity of AI detectors. They highlight potential instances where foreign-born students and workers might face unfair accusations or even penalties due to false cheating claims. The ethical implications of such scenarios are a cause for concern.

Furthermore, Zou points out that the detectors can be easily undermined through a technique known as "prompt engineering."

By asking a generative AI system to "rewrite" an essay in more sophisticated language, a student attempting to cheat can easily bypass the detectors. Zou gives a simple example: a student might feed AI-generated text back into ChatGPT with the prompt, "Elevate the provided text by employing literary language."
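To show how little effort this evasion takes, here is an illustrative sketch using the OpenAI Python SDK; the client setup and model name are assumptions for illustration, and only the quoted prompt comes from the study.

```python
# Illustrative sketch of the "rewrite" evasion Zou describes.
# Assumptions: the official `openai` Python SDK (v1+) and an API key in
# the environment; the model name is a placeholder, not one the study tested.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ai_generated_text = "..."  # text a detector would otherwise flag

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        # The exact prompt quoted in the study.
        {"role": "user",
         "content": "Elevate the provided text by employing literary language: "
                    + ai_generated_text},
    ],
)

rewritten = response.choices[0].message.content
# The rewrite raises the text's perplexity -- precisely the signal the
# detectors rely on -- so the flag often disappears.
```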

"Current detectors are clearly unreliable and easily manipulated, making them an unreliable solution to the AI cheating problem," Zou warns.

So, what is the way forward? Zou suggests a few possible approaches. In the short term, he recommends against relying on detectors in educational settings, particularly those with significant numbers of non-native English speakers.

Developers should also move beyond perplexity as the primary metric and explore more robust techniques. They should consider embedding subtle markers of AI authorship, such as watermarks, in generated content; one such scheme is sketched below. Ultimately, detection methods need to become less vulnerable to circumvention.
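The watermarking suggestion can be made concrete with a toy example. The sketch below illustrates one idea from the research literature, a "green-list" watermark in the style of Kirchenbauer et al., not a method proposed by Zou's team: hash the previous token to split the vocabulary in half, bias generation toward the "green" half, and later test whether a suspect text favors green tokens more often than chance would allow.

```python
# Toy sketch of a "green-list" watermark (one idea from the research
# literature, not the study's own proposal): hash the previous token to
# split the vocabulary, bias generation toward the "green" half, and
# test whether a text favors green tokens more than chance.
import hashlib
import random

VOCAB = [str(i) for i in range(1000)]  # stand-in vocabulary

def green_list(prev_token: str) -> set[str]:
    """Deterministically pick half the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens drawn from the green list of their predecessor."""
    hits = sum(tokens[i] in green_list(tokens[i - 1])
               for i in range(1, len(tokens)))
    return hits / max(len(tokens) - 1, 1)

# Demo: ordinary text hovers near 0.5; a generator that always prefers
# green tokens pushes the fraction toward 1.0, which a verifier who
# knows the hashing scheme can flag with a simple statistical test.
random_text = [random.choice(VOCAB) for _ in range(200)]
marked_text = ["0"]
for _ in range(200):
    marked_text.append(random.choice(sorted(green_list(marked_text[-1]))))
print(f"random: {green_fraction(random_text):.2f}")  # ~0.50
print(f"marked: {green_fraction(marked_text):.2f}")  # 1.00
```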

"At this time, the detectors are too unreliable, and the consequences for students are too high to put our faith in these technologies without rigorous evaluation and significant improvements," Zou concludes.
