Russian hackers are using ChatGPT to write malicious pieces of code
Hackers in Russia are not only keen on leveraging ChatGPT to write malicious code but have also succeeded in bypassing the geofencing meant to keep them off the platform, Business Insider reported.
ChatGPT, the chatbot launched by OpenAI to demonstrate the advances made in artificial intelligence (AI) research, has become famous thanks to its conversational way of interacting with users. Dig a little deeper, though, and the bot can help you write college essays, poems for a loved one, and even short stories to tickle your imagination.
Hackers in Russia, however, have gone a step further and used the chatbot to write malicious code. That ChatGPT could help write software, perhaps better than an entry-level programmer, was well known. That hackers could use it to write malicious code was probably anticipated too, but few expected them to achieve it so soon.
Russian hackers tapping into ChatGPT
ChatGPT amassed more than a million users within just a week of launch. What many do not know, however, is that the service remains geofenced, meaning that people in certain parts of the world cannot access it at all.
This was done to ensure that the service is not used for nefarious purposes. However, according to a report from the cybersecurity company Check Point Research (CPR), Russian cybercriminals have been attempting to bypass the restrictions OpenAI put in place to prevent such misuse of the technology.
Underground hacking forums are rife with discussions on how to circumvent the restrictions on IP addresses, payment cards, and phone numbers, all of which are needed to gain access to ChatGPT from Russia. The report said that hackers in the region were looking to use the chatbot in their day-to-day operations, since it can make the entire exercise more cost-efficient.
Accessible AI, Dangerous AI?
Many have wondered whether the rise of AI would lead to a loss of jobs for humans. For now, though, it looks like the hacking community is leading the way in showing how the technology can streamline work and make humans more efficient.
A thread shared on a popular underground hacking forum last month was titled "ChatGPT - Benefits of Malware," and its author stated that he was using the chatbot to recreate malware strains and techniques that have been described in research publications, Business Insider said in its report.
Before this, another user boasted about creating his first Python script, which other users noted resembled code generated by OpenAI. The user confirmed that he had used OpenAI to write the code, raising fears that even actors with little skill could become dangerous with the chatbot as an aide.
While targeted attacks could still be countered, the chatbot's broad availability could also end up spreading misinformation and fake news. This is a threat that OpenAI is well aware of, and it has been working to understand how its AI model could be misused in this area.
Researchers at Georgetown University's Center for Security and Emerging Technology and the Stanford Internet Observatory are looking into this threat and have outlined steps that can be taken to prevent such misuse at scale. Their analysis and recommended steps were published in a report released last week.