Bentham, Hobbes, and The Ethics of Artificial Intelligence

Dusty old philosophical frameworks are becoming much more relevant as the ethics of artificial intelligence comes into focus.
John Loeffler

If you’ve been paying attention to the news at all, you might think the coming artificial intelligence revolution is a foregone conclusion. Yet many of these pronouncements give only cursory acknowledgment to the ethics of artificial intelligence, as if ethical questions were merely academic exercises with no real bearing on AI’s development.

Those on the cutting edge of AI development know better. Autonomous vehicle makers are carefully managing the roll-out of their systems, knowing full well that a hasty introduction could provoke a public backlash strong enough to shut the whole enterprise down. If you have any doubt about that, look no further than the debate around embryonic stem cell research in the United States for a recent example of political considerations trumping science.

So what are the controversies surrounding the ethics of artificial intelligence? Several of the major ethical frameworks, such as the Utilitarianism of Jeremy Bentham and Thomas Hobbes’ Social Contract Theory, have some important things to say about the development of artificial intelligence. And—like all good philosophy—there are no easy answers.

The Greatest Good For the Greatest Number

Jeremy Bentham
Source: Public Domain / Wikimedia Commons

In 1789, the British philosopher and social reformer Jeremy Bentham published An Introduction to the Principles of Morals and Legislation, one of the most important works of philosophy written since Aristotle.

In it, Bentham laid out the principle of utility, which holds that good is what is pleasurable and bad is what is painful, and that the most ethical action is the one that delivers the greatest good to the greatest number of people, or prevents the greatest harm to the greatest number.
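
To make the principle concrete, here is a minimal sketch of utilitarian aggregation as a decision rule. The scenario names and welfare scores are entirely hypothetical, invented for illustration, and this is a toy simplification rather than Bentham’s full felicific calculus.

```python
# A toy utilitarian decision rule: score each option by summing the
# welfare (pleasure minus pain) it produces across everyone affected,
# then pick the option with the greatest total. All names and numbers
# are hypothetical, chosen only to illustrate the aggregation.

def total_utility(welfare_by_person: dict[str, float]) -> float:
    """Sum welfare across all affected people."""
    return sum(welfare_by_person.values())

options = {
    "deploy AI system": {"commuters": +8.0, "displaced workers": -6.0},
    "keep status quo":  {"commuters": -1.0, "displaced workers": +2.0},
}

best = max(options, key=lambda name: total_utility(options[name]))
print(best)  # "deploy AI system" wins on this toy scoring: +2.0 vs +1.0
```

The difficulty, of course, lives entirely outside the code: deciding whose welfare counts and how to score it is precisely where the controversy lies.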

This is a very common argument employed in favor of artificial intelligence. Autonomous vehicles, it’s often pointed out, have the potential to prevent a million traffic fatalities every year by taking the human driver, whose error is nearly always the cause of the accident, out of the equation.

Tesla
Source: Tesla

This comes at the cost of millions of transportation jobs around the world, as truck drivers, taxi drivers, and others are replaced by AI systems. Many of these workers will find it difficult, if not impossible, to retrain for other jobs, causing widespread hardship throughout society.

Likewise, in manufacturing, millions of jobs have already been replaced by automation, and that number will only increase as AIs become able to perform more sophisticated tasks at a vastly reduced cost to manufacturers.

This will lead to lower prices on consumer products for everyone in society, but at the cost of depressed wages, and the drop in wages may not correspond to the drop in prices; there is no guarantee that the lower wages will buy the same standard of living as before.
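
To see why, consider a back-of-the-envelope sketch; all the figures below are hypothetical, chosen only to illustrate the arithmetic of purchasing power.

```python
# Toy purchasing-power check (all figures hypothetical): if automation
# pushes wages down faster than it pushes prices down, workers end up
# worse off even though goods are cheaper.

def real_wage_change(wage_change: float, price_change: float) -> float:
    """Fractional change in purchasing power, given fractional changes
    in nominal wages and prices (e.g. -0.20 means a 20% drop)."""
    return (1 + wage_change) / (1 + price_change) - 1

# Wages fall 20% while prices fall only 10%:
print(f"{real_wage_change(-0.20, -0.10):+.1%}")  # prints -11.1%
```

On these assumed numbers, a worker’s real standard of living falls by about 11% even though everything in the store costs less.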

In both of these cases, as with many others where artificial intelligence is involved, identifying the greatest good is a thorny issue, and one on which everyone ought to have their input heard. That, however, isn’t always the case.

What is the Social Contract?

Thomas Hobbes
Source: Public Domain / Wikimedia Commons

One of the most famous works of political and moral philosophy is Thomas Hobbes’ Leviathan, in which the English political theorist explores how societies coalesced out of what he famously termed “the state of nature.”

The state of nature, according to Hobbes, is a war of all against all: every man and woman was entirely responsible for their own life and had to fight every other person in a struggle for basic survival.

Every person had to be the carpenter who built their shelter, the farmer who grew their food, and the warrior who protected it all from the next person down the trail, someone who would kill them and take everything they had in order to survive instead.

Famously, Hobbes described life in the state of nature as “solitary, poor, nasty, brutish, and short.”

In response to this reality, Hobbes says, societies formed in which people cooperated instead of competed. The farmer grew food for the community, feeding the carpenter, who in turn built shelter for himself, the farmer, and the warrior, who protected all three of them.

This arrangement, known as the Social Contract, allowed humanity to develop specialized skills, advance technologically, and evolve socially beyond mere animals.

Does AI Breach the Social Contract?

What if someone abuses this arrangement? What if someone has his home built and his food grown for him, yet uses his skills not to help his community but to enrich himself at the community’s expense?

Without the work of the broader community to enable a safe environment for the researchers at Caltech and Google to build their AI systems, these systems would not be possible. Mark Zuckerberg could not have founded Facebook if he had to work in the fields to grow his food or if he had to personally secure Facebook’s campus every night from roving bands of marauders.

If the developers of these AI systems turn around and introduce systems that make the contributions of the rest of society redundant (who needs a truck driver when an autonomous vehicle can do the same work and never complain about hours or wages?), there is a very real danger that those put out of work will feel that the social contract has been egregiously breached, and their reaction can be very destabilizing.

Luddite
Source: Public Domain / Wikimedia Commons

History has no shortage of leaders who underestimated the wrath of an angry populace and lost centuries-old empires as a consequence. It isn’t enough to call an enraged mob of administrators an uneducated bunch of Luddites while they smash up your sophisticated quantum computer that has put them all out of work.

You could be 100% correct but you will still have your qubits destroyed—if you’re lucky.

Many of our leaders recognize this, which is why some of the biggest proponents of a Universal Basic Income come from the tech industry. It remains to be seen whether the politics of such programs can be ironed out before sophisticated AIs force the issue to a crisis point, or whether such a system would even be effective.

We Can Have an Ethical AI Society

Beach
Source: Martijn Meijerink / SkitterPhoto

While there are pitfalls, artificial intelligence also holds real promise. The trade-off between people’s safety and people’s jobs doesn’t have to produce a definitive loser. We could have machines do all of our work for us and actually fulfill the dreams of the many philosophers and economists who have been prophesying the End of Work for centuries.

If preventing the greatest harm is a moral imperative, and we can end the suffering brought on by compulsory labor under pain of starvation, then this is a definite social good that must be pursued, but only if we approach the coming revolution in AI ethically. Otherwise, Thomas Hobbes has a lot to say about the alternative.
