Discover how GPT-4 is better than GPT-3.5
OpenAI took the world by storm late last year when it introduced the Chat Generative Pre-trained Transformer - dubbed ChatGPT - an artificial intelligence-powered chatbot that lets users hold human-like conversations. Now the company has launched a new model - GPT-4 - in a bid to ‘scale up deep learning.’
But how does its performance compare with that of its previous version, GPT-3.5?
The latest addition to OpenAI’s series of models, GPT-4 is more reliable, more creative, and able to handle far more nuanced instructions than GPT-3.5, according to OpenAI’s own report - though the company notes that the difference can be subtle in casual conversation and emerges as tasks grow more complex.
In an experiment to demonstrate GPT-4’s superiority, the company tested both versions on a simulated bar exam originally designed for humans. GPT-4 passed with a score in the top 10% of test takers, while GPT-3.5 scored around the bottom 10%, indicating that the successor is considerably more capable than its predecessor.
On other publicly available tests, GPT-4 either outperformed GPT-3.5 or performed on par with it. For example, on the SAT Evidence-Based Reading & Writing section, GPT-4 scored in the 93rd percentile, compared with GPT-3.5’s 87th. In another example of its capability, GPT-4 reached the 80th percentile on the Graduate Record Examination (GRE) Quantitative section, while GPT-3.5 scored around the 25th.
In one of the biggest changes, the company says GPT-4 can now accept image prompts, whereas GPT-3.5 could only accept text. This means the model can read an image, understand its context, and generate textual answers. Unlike GPT-3.5, GPT-4 is a multimodal model that handles both language and vision.
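As a rough illustration of what an image-plus-text prompt looks like in practice, the sketch below assembles a request payload in the style of OpenAI’s Chat Completions message format, where a single user message can mix text and image parts. The model name and image URL are placeholders for illustration, not details from the article.

```python
def build_image_prompt(question: str, image_url: str) -> dict:
    """Build a request payload pairing a text question with an image,
    following the Chat Completions multi-part message shape."""
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [
            {
                "role": "user",
                # A list-valued "content" lets one message carry both
                # a text part and an image part.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_image_prompt(
    "What is unusual about this image?",
    "https://example.com/photo.jpg",  # placeholder URL
)
```

Sending this payload to the API (with valid credentials) would return a textual answer grounded in the image, which is the capability described above.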
It’s also now harder for users to trick the new chatbot. According to Trusted Reviews, GPT-4 has been trained to handle malicious questions, making it better at giving factual information and giving it more advanced reasoning capabilities than GPT-3.5.
One key drawback that OpenAI, by its own admission, has still not fully solved is that GPT-4 still ‘hallucinates’, just like its previous versions. In AI lingo, hallucination means the ‘tendency to produce nonsensical and untruthful information confidently’, so the model is still not fully reliable. The company warns users to take great care when using language-model outputs, particularly in high-stakes contexts.
That said, OpenAI claims to have significantly reduced hallucinations relative to previous models.
Another drawback is that GPT-4, like its predecessors, lacks knowledge of events that have occurred after September 2021 and does not learn from its experience.
The company, in its release statement said, “We’ve also been using GPT-4 internally, with great impact on functions like support, sales, content moderation, and programming. We also are using it to assist humans in evaluating AI outputs, starting the second phase in our alignment strategy.”