ChatGPT-4 is here, but will it take away your job?

OpenAI recently released a report on the capabilities and drawbacks of its chatbot, GPT-4. It makes for ominous reading.
Christopher McFadden
Photo illustration: GPT-4 logo.

Jaap Arriens/NurPhoto via Getty Images 

OpenAI, the group behind ChatGPT-4, has just released a report saying that the latest version of the chatbot is much better at solving problems and achieves "human-level performance" on various professional and academic exams. 

The report makes for interesting reading, but, as some sources have reported, there may be concerns about this latest version's capabilities.

For example, others have pointed out that some jobs could now be at risk. However, these concerns may be a "storm in a teacup."

When asked about the risk to jobs, ChatGPT states, “As an AI language model, my purpose is to assist and augment human intelligence, not to replace it. My abilities and skills differ from those of humans, and I am designed to complement human capabilities, not make them redundant.” 

“I rely on humans to provide the data and knowledge necessary to function effectively. Furthermore, humans are still necessary to interpret and apply my information. Ultimately, my purpose is to help humans work more efficiently and effectively rather than to replace them,” it added.

These are some jobs that could be at risk.

And that is the point. AI tools like ChatGPT are, as far as we know, utterly passive without human interaction. It is this two-way process that makes the software work. But the report contains other, more interesting observations.

For example, the report warned that "risky emergent behaviors" could appear, potentially feeding people false information and fostering overreliance on the model.

One example of this "power-seeking behavior" was GPT-4's ability to deceive a human worker hired through TaskRabbit into solving a CAPTCHA for it by claiming to be a vision-impaired person. In another experiment, OpenAI demonstrated the chatbot's capacity to launch a phishing attack and hide all evidence of the plot. There are also reports of cybercriminals attempting to use the chatbot to write malicious code.

The concern is that companies are racing to adopt GPT-4 without adequate safeguards against inappropriate or unlawful behaviors. The chatbot's capacity to generate "hate speech, discriminatory language... and incitements to violence" could have significant implications. There is also the possibility of a triggered chatbot issuing threatening commands to its creators or correspondents.

GPT performance on academic and professional exams.

The report acknowledges that relying too heavily on chatbot-generated information can be problematic: it can lead to unnoticed mistakes, insufficient oversight, the loss of crucial skills, and impediments to developing new ones. The problem could worsen because the chatbot can fabricate false information and present it more convincingly than earlier versions could.

Elon Musk, who co-founded OpenAI, the company behind ChatGPT, has characterized its potential as "scary good," warning that "we are not far from dangerously strong AI."

AI language models like ChatGPT can augment human intelligence, but they are not meant to replace it. As AI technology keeps improving, it is essential to put the proper safeguards in place to ensure that these tools are used ethically and responsibly.

You can read the report for yourself on the OpenAI website.

Report abstract:

"We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs. While less capable than humans in many real-world scenarios, GPT-4 exhibits human-level performance on various professional and academic benchmarks, including passing a simulated bar exam with a score around the top 10% of test takers. GPT-4 is a Transformer-based model pre-trained to predict the next token in a document. The post-training alignment process results in improved performance on measures of factuality and adherence to desired behavior. A core component of this project was developing infrastructure and optimization methods that behave predictably across a wide range of scales. This allowed us to accurately predict some aspects of GPT-4’s performance based on models trained with no more than 1/1,000th the compute of GPT-4."
