Human-Written Trivia Questions Expose the Weaknesses of AI Question-Answering
You've most likely watched Jeopardy! or some similar trivia game show, where contestants standing behind bright podiums have to answer trivia questions as fast and as accurately as possible. These shows are great fun, not to mention educational.
Now imagine a human and a computer system playing against each other, and against the clock.
You might expect the results to be skewed in the artificial intelligence's favor, and you would be right.
However, a group of researchers from the University of Maryland (UMD) put together 1,213 questions, written by humans working alongside machines so that the authors could see the computers' answers as they wrote, and found they could stump the systems relatively easily by tweaking the questions slightly.
The modern world of virtual assistants
AI still has a way to go in terms of fully understanding human language. All we need to do is speak with Siri to know that the virtual assistant doesn't always pick up exactly what humans are saying.
The same goes for most artificial intelligence computer systems that listen to and interact with humans.
That said, when IBM's Watson computer played Jeopardy! against humans back in 2011, it was a clear win for the computer.
This is what prompted the group of researchers from the University of Maryland to create over 1,200 questions - questions easy for humans to answer - that ended up stumping computer systems.
So far, no computer system has been able to answer these questions correctly.
What intrigued the researchers was exactly how computers come up with their answers, and not the answers themselves. Understanding what the computers actually understood was their goal.
"Most question-answering computer systems don't explain why they answer the way they do, but our work helps us see what computers actually understand," said Jordan Boyd-Graber, associate professor of computer science at UMD and senior author of the research.
Boyd-Graber continued "In addition, we have produced a dataset to test on computers that will reveal if a computer language system is actually reading and doing the same sorts of processing that humans are able to do."
Humans and computers writing the questions together, not one or the other
By having humans and computers create the questions together, rather than relying on humans alone or computers alone, Boyd-Graber and his team were able to build a computer interface that displays what the computer is "thinking" as a human writes a question down.
The human author can then edit the question to put the computer in a tricky situation.
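The loop described above can be illustrated with a toy sketch. Everything here is an assumption for illustration: the real UMD interface surfaced a trained QA model's live guesses, whereas this stand-in "model" is just a hypothetical keyword-overlap scorer. The idea it demonstrates is the same: the author watches the model's guess, then rewrites the question to avoid the surface cues the model leans on while keeping it easy for a human.

```python
# Toy stand-in for an adversarial question-writing interface.
# The tiny "knowledge base" and keyword-overlap scoring are invented
# for illustration; they are not the study's actual model or data.
KB = {
    "George Washington": {"first", "president", "revolutionary", "cherry"},
    "Abraham Lincoln": {"emancipation", "gettysburg", "civil", "sixteenth"},
}

def model_guess(question: str):
    """Return the model's best answer and its keyword-overlap score."""
    words = set(question.lower().replace("?", "").split())
    scores = {answer: len(words & keywords) for answer, keywords in KB.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# The author drafts a question and sees the model's confident guess...
draft = "Who was the first president of the United States?"
print(model_guess(draft))   # high keyword overlap -> easy for the model

# ...then rewrites it to dodge the trigger words the model depends on,
# while a human can still answer it easily.
edited = "Which founding father led the Continental Army and later the nation?"
print(model_guess(edited))  # overlap drops to zero -> the model is stumped
```

In the real study, a drop in the model's confidence (here, the overlap score) signaled to the author that an edit had successfully targeted a weakness.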
To test their work, the researchers pitted junior varsity high school trivia teams, as well as Jeopardy! champions, against computers on these questions.
"Will computers ever really master language? First they'll need to learn to answer questions that target their weaknesses. @UMDCS and @UMIACS's Jordan Boyd-Graber & his team have developed 1,213 questions to do just that. Read more about it here. https://t.co/FPYKwqSQbC"
UMD Science (@UMDscience), August 6, 2019
Every human trivia team won, even those that didn't score particularly high. No computer system won.
"For three or four years, people have been aware that question-answering computer systems are very brittle and can be very easily fooled," said co-author of the paper and UMD computer science graduate student Shi Feng.
Feng went on to say, "But this is the first paper we are aware of that actually uses a machine to help humans break the model itself."