Artificial intelligence systems grow more human-like every day. AI robots conjure up countless pop culture references - from HAL in 2001: A Space Odyssey to Terminator to I, Robot. And in nearly every scenario involving AI, things don't go well for the humans. The AI technology currently available shows no signs of overthrowing the establishment. However, some experts say we can't be too careful.
A team of researchers pieced together the Asilomar AI Principles - 23 principles for future AI development and its integration into society. Essentially, they made a list of dos and don'ts so humans don't get shredded by AI robots. Each researcher - ranging from robotics engineers to algorithms experts - contributed ideas for the list.
Sound a little crazy? Sound like too much work? Tesla's Elon Musk and physicist Stephen Hawking endorsed the list.
The writers subdivided the list into three sections: Research Issues, Ethics and Values, and Longer-term Issues. Main points of the list include:
- Avoiding an arms race in lethal autonomous weapons. (principle 18)
"I consider that one of the greatest dangers is that people either deal with AI in an irresponsible way or maliciously — I mean for their personal gain," said Yoshua Bengio, University of Montreal professor and head of the Montreal Institute for Learning Algorithms.
"And by having a more egalitarian society, throughout the world, I think we can reduce those dangers. In a society where there’s a lot of violence, a lot of inequality, the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question."
- AI systems' access to personal data shouldn't unreasonably curtail a person's liberty. (principle 13)
Guruduth Banavar, Vice President of IBM Research, said individuals should always have the upper hand when it comes to their data and information:
"It creates a persona for the individual or institution – personality traits, emotional make-up, lots of the things we learn when we meet each other. AI will do that too and it's very personal. I want to control how [my] persona is created. A persona is a fundamental right."
- There needs to be a healthy relationship between AI researchers and government officials. (principle 3)
Granted, the writers understand the list doesn't cover everything. The principles, they said, are guidelines rather than restrictions.
"From this list, we looked for overlaps and simplifications, attempting to distill as much as we could into a core set of principles that expressed some level of consensus," they said. "But this 'condensed' list still included ambiguity, contradiction, and plenty of room for interpretation and worthwhile discussion."
The team isn't the only voice drawing attention to AI discussions. In 2014, Elon Musk famously likened AI development to "summoning a demon." Last October, Hawking called AI either "the best or worst thing" ever to happen to humanity.
"I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful," said Musk to The Guardian. "I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish."
For the full list of principles, see the Future of Life website here.