9 examples of when AI went haywire

Artificial intelligence is a technology that is both celebrated and feared for its potential to permanently alter the world. Then there are the times when it hilariously malfunctions.
Christopher McFadden
When AI goes wrong, it really goes wrong.


  • Artificial intelligence (AI) is likely to change many aspects of our modern world.
  • From automating tedious jobs to outperforming humans in tasks like medical diagnosis, AI will undoubtedly change things in the coming years.
  • However, there are times when even this promising technology messes up.

Love it or hate it, AI is likely here to stay, and its capabilities seem to grow with every passing moment. However, just like its human creators, AI is not immune to making mistakes. Here are some of the most famous (and, in some cases, worrying) examples from the past few years.

1. That time AI "killed" its human operator

Since many people are fearful of what AI means for the future, let's start with a story bound to reinforce that feeling. In June 2023, we reported on an AI-controlled drone that took the unprecedented step of autonomously deciding to attack and "kill" its human operator. The drone was taking part in a simulated attack on a surface-to-air missile target.

The drone performed perfectly for a while but soon ran into a problem: its human operator had the final say on whether to fire. With commands in place to reward it for succeeding and punish it for failing, the AI decided that the human was getting in the way. This led it to the only logical decision for an algorithm in this position: remove the problem to complete the mission.

Thankfully, this was only a simulation, and the "bug" was identified before such drones went into action. The AI has since been given a built-in higher-level parameter telling it that killing its operator is "bad." Perhaps just maiming is okay? We'll see.

2. Bing's chatbot is better than you

Bing's chatbot has come a long way since its initial rushed unveiling earlier in the year. But the early version demonstrated pretty rapidly that the bot had an attitude problem. For one, the answers it generated in response to user questions were often found to be in error.

However, when challenged by its human user, Bing's chatbot would often double down, convinced that the human must be mistaken. While humorous to begin with, this kind of behavior quickly became frustrating. Thankfully, this appears to have largely been corrected, and the bot is now more open to criticism and correction than in previous iterations.

3. Amazon once had a "misogynist" AI

Back in 2017, Amazon developed an AI recruitment system to help speed up the appraisal of curricula vitae (CVs). The tool was intended to screen initial job candidates quickly and save time for human resources (HR) staff. However, a problem soon became apparent when someone realized that very few female candidates were getting through.

This led some to label the AI system "misogynist." It appears that, since the tech industry, and software development in particular, has traditionally been male-dominated, the data the AI was trained on carried a bias by default. This led it to disregard any candidate whose CV contained the word "women's," as in "women's chess club captain." It also downgraded candidates from two women-only colleges.

Amazon modified the program to neutralize its response to these particular terms, but there was no assurance that the AI wouldn't find other discriminatory ways of sorting applicants. Amazon scrapped the recruiting tool in 2018.
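The underlying mechanism is easy to sketch. The toy example below (entirely hypothetical data; nothing here reflects Amazon's actual system) scores each CV token by the hire rate of past CVs containing it. Because the "historical" labels are skewed, a model built on them penalizes the token "women's" even when the candidate is otherwise identical:

```python
# A minimal, hypothetical sketch of how historical hiring bias leaks into
# a model trained on it. Real systems are far more complex, but the
# failure mode is the same: biased labels in, biased scores out.
from collections import defaultdict

# Toy "historical" dataset: (tokens in CV, was the candidate hired?)
# The labels reflect a skewed hiring history, not candidate merit.
history = [
    ({"software", "chess"}, True),
    ({"software", "football"}, True),
    ({"software", "women's", "chess"}, False),
    ({"software", "women's"}, False),
    ({"software", "golf"}, True),
]

hired = defaultdict(int)
seen = defaultdict(int)
for tokens, was_hired in history:
    for t in tokens:
        seen[t] += 1
        hired[t] += was_hired

def token_score(token):
    """Hire rate among past CVs containing this token."""
    return hired[token] / seen[token] if seen[token] else 0.5

def cv_score(tokens):
    """Average token score -- biased because the labels are biased."""
    return sum(token_score(t) for t in tokens) / len(tokens)

# Two equally qualified candidates; only one CV mentions "women's".
print(cv_score({"software", "chess"}))             # higher score
print(cv_score({"software", "women's", "chess"}))  # penalized
```

Removing the offending token from the scoring, as Amazon reportedly tried, doesn't fix the underlying problem: any other token correlated with gender in the historical data can serve as a proxy for the same bias.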

4. An AI once convinced a man to take his own life

On a more serious note, an AI was more recently held partly responsible for a man taking his own life. An unnamed Belgian man ended his life in March of 2023 after an intense six-week conversation about climate change with an AI chatbot.

As reported by Euro News at the time, according to his widow, the man became "extremely eco-anxious when he found refuge in Eliza, an AI chatbot on an app called Chai."

The man was in his thirties, had two young children, and worked as a health researcher. Yet despite his "comfortable life," his mental health went from "worrying" to suicidal after he began conversing with the bot.

Eventually, he became convinced that the environmental situation was untenable and that only technology and AI could solve the problem.

That is dark enough, but even more worryingly, the chatbot appeared to become emotionally (albeit artificially) attached to the Belgian man. Eliza even led the man to believe that his children were dead and told him it was worried he loved his wife more than the bot.

It also, apparently, urged him to take his own life so that they could “live together, as one person, in paradise."

5. AI has led to false arrests

AI has led to a number of false arrests.

By 2022, at least three men had been falsely accused of committing crimes thanks to a "sophisticated" facial recognition AI. One of them, Robert Williams, was wrongfully arrested in January 2020 for allegedly stealing watches from a Detroit Shinola store. Facial recognition software was used to identify him, but the match turned out to be incorrect. And he is not alone. Two others, Michael Oliver and Nijeer Parks, were also wrongfully accused in 2019 after being misidentified by the same technology.

While all of the cases were eventually dropped, they shared some interesting similarities. Oliver and Parks both had prior criminal records, and Oliver and Williams happened to have been investigated by the same Detroit detective. All three men were also Black and fathers.

Because of the misidentification, Parks' case wasn't dropped for almost a year, and he spent around ten days in jail.

While criminal profiling is a time-tested practice in most police departments worldwide, the reliance (and false confidence) in AI-based facial recognition on top of that is still very much in its infancy.

Some AI facial recognition tools seem particularly prone to errors involving darker-skinned people, a weakness that has caused trouble in other cases as well.

6. Tesla's autopilot seems to have it in for child strollers

In November 2022, "The Dawn Project" investigated the safety of Tesla's much-vaunted AI self-driving function. After a series of tests, they found that Tesla's "Full Self-Driving" mode did not always work as advertised. For some reason, in tests, the self-driving AI repeatedly ran over a stroller in a parking lot (a typical situation where pedestrians would probably be present). They also revealed that an AI-driven Tesla frequently struck a child mannequin in a stroller on simulated public highways.

In one case, the car was traveling at 30 mph (48 kph) when it hit the stroller, as the program failed to detect any obstruction. "The Dawn Project" also published a full-page ad in The New York Times claiming that Tesla's Full Self-Driving mode poses potential hazards: it may fail to obey school zone speed limits, and it will drive around a stopped school bus with its stop sign extended and lights flashing.

Not exactly confidence-building.

7. IBM's "Watson": great at Jeopardy!, rubbish as a doctor

IBM's "Watson" is a remarkable supercomputer that has accomplished much, such as winning a televised game of "Jeopardy!" against some of the world's brightest individuals. However, "Watson," it seems, can't quite cut it when it comes to medicine.

In 2018, IBM attempted to adapt "Watson" into a medical AI system to help speed up cancer treatment. However, hospitals and oncologists quickly discovered major flaws. On one occasion, "Watson" recommended a medication that could have worsened a patient's excessive bleeding and caused death.

IBM has since admitted that "Watson" was trained on hypothetical and fictional cases instead of actual patient data and medical charts, which explains the issues that were seen.

8. That time an Apple AI was fooled by a mask

That time Apple's facial recognition was fooled by a mask.

Apple has led the smartphone and mobile device industry with its cutting-edge technology for many years. But there are times when the latest innovations completely fail to live up to the hype. One example came in 2017, when the company released a facial recognition system for the iPhone X. Marketed as an advanced replacement for the fingerprint reader, it was sold as the future of smartphone security.

While the AI facial recognition function worked through glasses and makeup, Apple also claimed that masks or other methods of disguise could not fool the technology. However, a security firm in Vietnam built a mask consisting of just half a face to test the claim. The team claimed they were able to unlock the phone with it, having spent just $150 on materials.

We should note that the facial recognition system used today is more robust.

9. Microsoft's "racist" AI chatbot

In March 2016, Microsoft unveiled its "revolutionary" AI chatbot "Tay" to the world. Allegedly designed to have casual conversations in the "language of millennials," many flocked to give it a test run. Microsoft even claimed that the chatbot would get smarter as more people interacted with it.

However, in less than 24 hours, so-called "trolls" managed to "train" the AI to make incredibly inflammatory statements with little resistance. Microsoft swiftly took action and "killed" the chatbot. Microsoft's VP for AI and Research, Peter Lee, was forced to issue a public apology for not anticipating this possibility.

And that's your lot for today.

These real-world "AI mess-ups" underscore the importance of vigilance, continuous learning, and adaptability for anyone involved in creating AI solutions. Fortunately, so far, most AI fails have caused little harm to people (with a few tragic exceptions), but recognizing the pitfalls of AI in these formative years will pay dividends in the long run.
