ChatGPT falls short of students in math and economics
ChatGPT is one of the most widely used generative AI tools in the world right now. Trained on vast amounts of text and refined through feedback from interactions with people, ChatGPT has proven to be a valuable tool across many industries, and especially among school-aged and university students.
It can produce succinct, comprehensive, and well-structured answers, which is why it is difficult to distinguish an essay written by a research student from one written by a chatbot.
So, when chatbots like ChatGPT cropped up, plagiarism concerns spawned a wave of AI-text detectors, tools aimed at determining whether a given text was generated by ChatGPT. Now, researchers at New York University Abu Dhabi have examined ChatGPT's potential as a tool for plagiarism.
Testing ChatGPT's superiority in academics
The team compared the chatbot's performance with that of students across 32 university-level courses from eight disciplines, and found that ChatGPT's performance was comparable, or even superior, to the students' in nine of the 32 courses.
Outside of courses focused on mathematics and economics, ChatGPT was on par with or outperformed students in data structures, public policy, quantitative synthetic biology, cyber warfare, object-oriented programming, structure and properties of civil engineering materials, biopsychology, climate change, and management and organizations.
The ethical implications of using ChatGPT in classrooms and essays have been widely discussed and debated, with many educational institutions initially banning chatbots before coming around to incorporating them into their teaching methods. What counts as plagiarism is still up for debate, because ChatGPT runs on a large language model trained on vast amounts of data, and the chatbot doesn't know exactly which sources it is basing its answers on.
So the team also evaluated existing classifiers designed specifically to detect ChatGPT-generated text, and assessed whether a person could trick them into judging a ChatGPT-written passage as human work. The team found that these detectors tend to misclassify human answers as AI-generated and vice versa. Worse, a student can obfuscate the text through light editing, which renders detection largely futile: the classifier then fails to catch 95% of ChatGPT answers.
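To make those two failure modes concrete, here is a minimal sketch of how a detector's error rates would be tallied. The verdict lists below are invented for illustration and are not the study's data; the study evaluated real classifiers on real student and ChatGPT answers.

```python
# Toy evaluation of an AI-text detector's two failure modes.
# Verdicts are invented for illustration (1 = flagged as AI, 0 = judged human).

verdicts_on_human_answers = [0, 1, 0, 0, 1, 0, 0, 0]  # detector run on human-written answers
verdicts_on_ai_answers    = [1, 0, 0, 1, 0, 0, 0, 0]  # detector run on lightly edited ChatGPT answers

# False positives: human work wrongly flagged as AI-generated.
false_positive_rate = sum(verdicts_on_human_answers) / len(verdicts_on_human_answers)

# Misses: AI-generated work that slips through undetected.
miss_rate = verdicts_on_ai_answers.count(0) / len(verdicts_on_ai_answers)

print(f"Human answers flagged as AI: {false_positive_rate:.0%}")  # 25%
print(f"ChatGPT answers missed:      {miss_rate:.0%}")            # 75%
```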
Students will continue to use ChatGPT
This bias was also pointed out in another study, by Stanford researchers, in which an AI detector classified over half (61.22%) of TOEFL essays written by non-native English speakers as AI-generated. The researchers of that study noted that non-native speakers' writing typically scores lower on measures of linguistic variability, such as lexical richness, lexical diversity, syntactic complexity, and grammatical complexity, and therefore registers as low-perplexity text, the same statistical signature that flags output from AI tools like ChatGPT.
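As a rough illustration of why such a signal misfires, here is a minimal sketch of a threshold-based detector. It uses type-token ratio as a crude stand-in for the model-based perplexity that real detectors compute; the threshold and the example sentences are invented for illustration.

```python
# Toy "detector" that flags low-variability text as AI-generated.
# Type-token ratio is only a crude stand-in for model-based perplexity,
# and the 0.6 threshold is arbitrary.

def lexical_diversity(text: str) -> float:
    """Ratio of unique words to total words (type-token ratio)."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def naive_detector(text: str, threshold: float = 0.6) -> str:
    """Flag text below the diversity threshold as AI-generated."""
    return "AI-generated" if lexical_diversity(text) < threshold else "human"

# Plainer, more repetitive prose, common in non-native writing, can land
# below the threshold just as machine text does, producing the bias above.
plain_prose  = "the study is good and the study shows the result is good"
varied_prose = "researchers demonstrated surprisingly nuanced, discipline-specific outcomes"

print(naive_detector(plain_prose))   # "AI-generated" (a false positive)
print(naive_detector(varied_prose))  # "human"
```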
The team also surveyed 151 undergraduate students and 60 professors to understand their perceptions of the chatbot. There was consensus among students that they would use ChatGPT in their assignments and essays, while the professors saw its use as plagiarism. Additionally, the students unanimously said that, in their future jobs, they would outsource mundane tasks to ChatGPT, allowing them to focus on substantive and creative work.
“The inherent conflict between these two poses pressing challenges for educational institutions to craft appropriate academic integrity policies related to generative AI broadly, and ChatGPT specifically. Our findings offer timely insights that could guide policy discussions surrounding educational reform in the age of generative AI,” noted the researchers in their study.
The study was published in the journal Scientific Reports.
Study abstract:
The emergence of large language models has led to the development of powerful tools such as ChatGPT that can produce text indistinguishable from human-generated work. With the increasing accessibility of such technology, students across the globe may utilize it to help with their school work—a possibility that has sparked ample discussion on the integrity of student evaluation processes in the age of artificial intelligence (AI). To date, it is unclear how such tools perform compared to students on university-level courses across various disciplines. Further, students’ perspectives regarding the use of such tools in school work, and educators’ perspectives on treating their use as plagiarism, remain unknown. Here, we compare the performance of the state-of-the-art tool, ChatGPT, against that of students on 32 university-level courses. We also assess the degree to which its use can be detected by two classifiers designed specifically for this purpose. Additionally, we conduct a global survey across five countries, as well as a more in-depth survey at the authors’ institution, to discern students’ and educators’ perceptions of ChatGPT’s use in school work. We find that ChatGPT’s performance is comparable, if not superior, to that of students in a multitude of courses. Moreover, current AI-text classifiers cannot reliably detect ChatGPT’s use in school work, due to both their propensity to classify human-written answers as AI-generated, as well as the relative ease with which AI-generated text can be edited to evade detection. Finally, there seems to be an emerging consensus among students to use the tool, and among educators to treat its use as plagiarism. Our findings offer insights that could guide policy discussions addressing the integration of artificial intelligence into educational frameworks.