MIT Digs Deep into the Ethics of Programming Autonomous Vehicles in Global Survey

More than 2 million people responded to the survey, which asked questions related to the Trolley Problem.
Jessica Miley

As the prospect of fully autonomous cars on city streets draws nearer to reality, questions about the ethics of programming autonomous machines come up again and again. A new survey by MIT shows there are some global trends in the ethics of autonomous vehicles, as well as some interesting local differences.

The survey had more than 2 million online participants from over 200 countries, all of whom weighed in on the classic “Trolley Problem.” The trolley problem describes a runaway trolley heading towards a group of people; you have the power to pull a switch and divert the trolley so that it collides with just a single person instead. What would you do?

Trolley Problem re-imagined for autonomous vehicles

In the case of autonomous vehicles, the question is framed to consider whether the car should swerve towards a small number of bystanders or a larger group, along with other related trade-offs.

“The study is basically trying to understand the kinds of moral decisions that driverless cars might have to resort to,” says Edmond Awad, a postdoc at the MIT Media Lab and lead author of a new paper outlining the results of the project. “We don’t know yet how they should do that.”

Still, Awad adds, “We found that there are three elements that people seem to approve of the most.” Almost universally, the study found that people preferred to spare the lives of humans over animals, the lives of many over few, and the lives of the young over the old.

“The main preferences were to some degree universally agreed upon,” Awad notes. “But the degree to which they agree with this or not varies among different groups or countries.”

Biased programming needs to be taken seriously

The problem of biased programming has come up again and again as algorithms have been given responsibility for everything from tweeting to hiring. For example, Amazon ditched an AI-powered program it had been using to screen job applicants after finding that the program favored men for technical jobs.


The program, which read thousands and thousands of resumes, had taught itself to prefer men over women, likely because it was seeing more resumes from men that matched the criteria. It wasn't that the women's resumes weren't good; there were simply fewer of them.
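A toy sketch can make that dynamic concrete. This is not Amazon's actual system; it is an illustration, assuming (for the sake of the example) a scikit-learn logistic regression, made-up numbers, and historical hiring decisions that leaned towards the over-represented group. The point is only that a model trained on skewed data can absorb the skew as if it were signal.

```python
# Toy sketch only, NOT Amazon's system: it illustrates how a model trained
# on historically skewed hiring data can pick up that skew as a "signal".
# All variable names and numbers here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical applicant pool: a skill score (identically distributed for
# everyone) and a flag marking the over-represented group (80% of resumes).
group = rng.binomial(1, 0.8, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Assume past hiring decisions leaned towards the over-represented group on
# top of skill -- the kind of historical bias the model then learns to copy.
hired = (skill + 0.8 * group + rng.normal(0.0, 1.0, size=n)) > 1.0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# A positive weight on the group flag means the model has encoded the
# historical preference, even though the flag says nothing about skill.
print("weight on skill:      %.2f" % model.coef_[0][0])
print("weight on group flag: %.2f" % model.coef_[0][1])
```

Running the sketch prints a clearly positive weight on the group flag, which is the bias an auditor would hope to catch before such a model were used on real applicants.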

In a similar story, Microsoft had to pull its chatbot Tay off Twitter after it started tweeting racist and bigoted remarks just 24 hours after its release. Described as an experiment in “conversational understanding,” Tay was expected to get smarter and chattier the more conversations it engaged in. However, Twitter users began tweeting vile things at Tay, and Tay, learning from what it saw as such bots do, began sending those sentiments back into the world.

None of these stories requires alarm bells to ring; rather, they call for a conscious understanding of the impact programming can have, and for a broad, considered approach to all decision-making technologies.

Via: MIT

 
