This AI Could Make Better Deals and Compromises Than Humans Can

An international team of researchers partnered to develop an algorithm that could make robots more compassionate and better at compromising than their human creators.
Shelby Rogers
The KUKA robotics group tested its robot's prowess against human players in 2016. [Image: KUKA Robot Group/YouTube]

Artificial intelligence systems seem to constantly make headlines by one-upping humanity. They can play better chess than we do. They can create art faster than we can. AI can even compose radio-ready music in record time. But these systems have always seemed to lack the elements that make humanity tick -- intangible qualities like compassion, understanding, and feeling. 

And yet all of that might be changing, thanks to new research. 

A computer science team from Brigham Young University, in partnership with MIT and other international universities, just created a new algorithm that can outperform us at a distinctly "human" activity -- compromising. 

BYU computer science professors Jacob Crandall and Michael Goodrich developed the new system. The pair's research showed that compromise among machines isn't just possible; machines can be better at it than humans. 

"The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills," said Crandall, whose study was recently published in Nature Communications. "AI needs to be able to respond to us and articulate what it's doing. It has to be able to interact with other people."


The researchers developed an algorithm called S# and programmed machines with it. They then pitted the machines against each other in head-to-head two-player games in order to observe certain relationships. The BYU team observed machine v machine, machine v human, and human v human matchups to measure levels of understanding and attempts to compromise. In nearly all instances, the machines programmed with the new algorithm found solutions that benefited both players more effectively than the human participants did. 

"Two humans, if they were honest with each other and loyal, would have done as well as two machines," Crandall said. "As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It's programmed to not lie, and it also learns to maintain cooperation once it emerges."


Machines playing games fairly? That doesn't sound entirely realistic, and it isn't the whole story. The researchers recognized that playing not to win but to break even would defeat the purpose of competition. So, to add a semblance of realism, they programmed the machines with trash-talk phrases to deliver whenever they felt betrayed by an opponent. These ranged from "Curse you!" to "You will pay for that!" and an impressive "In your face!" When an action benefited both players, a machine would instead offer encouraging responses like "Sweet. We are getting rich," or the more conservative "I accept your last proposal."
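The study itself doesn't publish its code in this article, but the setup described above -- repeated two-player games in which agents pair each move with a "cheap talk" phrase -- can be sketched with a toy example. The sketch below is hypothetical: it uses a simple tit-for-tat strategy in a repeated Prisoner's Dilemma, not the actual S# algorithm, and the agent and function names are invented for illustration. Only the quoted phrases come from the article.

```python
# Hypothetical sketch, NOT the published S# algorithm: a toy repeated
# Prisoner's Dilemma where agents attach cheap-talk phrases to outcomes,
# loosely mirroring the experiment described in the article.

PAYOFFS = {  # (my_move, their_move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

class TalkativeAgent:
    """Tit-for-tat player that comments on each round's outcome."""
    def __init__(self):
        self.last_opponent_move = "C"  # start by assuming goodwill

    def choose(self):
        return self.last_opponent_move  # mirror the opponent's last move

    def react(self, my_move, their_move):
        self.last_opponent_move = their_move
        if my_move == "C" and their_move == "D":
            return "Curse you!"                 # betrayed while cooperating
        if my_move == "C" and their_move == "C":
            return "Sweet. We are getting rich."  # mutual benefit
        return "I accept your last proposal."

def play(rounds=10):
    """Run a repeated game and return both agents' total scores."""
    a, b = TalkativeAgent(), TalkativeAgent()
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a.choose(), b.choose()
        score_a += PAYOFFS[(ma, mb)]
        score_b += PAYOFFS[(mb, ma)]
        a.react(ma, mb)
        b.react(mb, ma)
    return score_a, score_b

# Two cooperative agents sustain mutual cooperation: 3 points per round.
print(play(10))  # → (30, 30)
```

In this toy version, two honest agents lock into mutual cooperation and split the best joint outcome, which is the behavior Crandall describes: once cooperation emerges, the machine learns to maintain it.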

Ultimately, Crandall noted that he hopes the research could mean nicer machines and maybe even nicer humans. 

"In society, relationships break down all the time," he said. "People that were friends for years all of a sudden become enemies. Because the machine is often actually better at reaching these compromises than we are, it can potentially teach us how to do this better."
