Why We Trust Algorithms More Than Other Humans
Building playlists and picking the right shoe size are no longer just part of online shopping; algorithms now streamline nearly every aspect of our consumer lives.
Concerns surrounding the growing intrusiveness of algorithms in daily life are mounting, but people still think computer programs are more trustworthy than their fellow humans — especially when the task is challenging — according to a new study published in the journal Scientific Reports.
Counting as a 'fundamental test' of trust in algorithms
"Algorithms are able to do a huge number of tasks, and the number of tasks that they are able to do is expanding practically every day," said Eric Bogert, a doctoral student at the Terry College of Business Department of Management Information Systems. "It seems like there's a bias towards leaning more heavily on algorithms as a task gets harder and that effect is stronger than the bias towards relying on advice from other people."
Bogert conducted the study with Rick Watson, a professor of management information systems, and Aaron Schecter, an assistant professor. The study involved 1,500 participants who evaluated photographs, and it belongs to a larger body of research on when and how people rely on algorithms to process information and reach decisions.
The study asked volunteers to count how many people were in a photo of a crowd, offering them suggestions that came either from a group of other people or from a computer algorithm.
As the crowds grew larger and the counting became more challenging, participants were increasingly likely to take the algorithm's word over that of other humans, or even over their own count, Schecter said in a TechXplore report. He added that using counting as the trial task was crucial because its difficulty is objectively verifiable: each photo contains a definite number of people. Counting is also the kind of task that laypeople typically expect computers to handle with ease.
"This is a task that people perceive that a computer will be good at, even though it might be more subject to bias than counting objects," said Schecter. "One of the common problems with AI is when it is used for awarding credit or approving someone for loans. While that is a subjective decision, there are a lot of numbers in there — like income and credit score — so people feel like this is a good job for an algorithm."
Human bias compromises facial recognition and hiring algorithms
"But we know that dependence leads to discriminatory practices in many cases because of societal factors that aren't considered," Schecter added. Facial recognition and hiring algorithms have likewise drawn growing controversy in recent years, as their use has revealed cultural biases built into their systems. Those biases can produce erroneous results when matching faces to the identities of real people, or mistakenly eliminate fully qualified job candidates, Schecter said.
Biases like these rarely surface in basic tasks such as counting, but the presence of systemic bias in other widely trusted algorithms shows why it is critical to understand when and why people prefer relying on algorithms over other humans, Schecter added.
"The eventual goal is to look at groups of humans and machines making decisions and find how we can get them to trust each other and how that changes their behavior," Schecter explained. This field of study is still nascent and, beyond the science, carries profound ethical implications for everyone in the modern world, which makes it enormously complicated. That's why Schecter and his team are "starting with the fundamentals."