Artificial Intelligence Should Be Protected by Human Rights
Humanity is entering uncharted territory, taking major leaps into the world of artificial intelligence (AI). Experts now have growing concerns about how robots will react to human environments, raising the question of whether artificial intelligence should be protected by human rights.
Of course, with advanced military exoskeletons giving humans superpowers and robotic machinery causing constant workplace injuries, robots are clearly dangerous to people. But an Oxford mathematician asks whether robots need protection from us, too. With the ‘minds’ of machines evolving ever closer to something indistinguishable from human intelligence, new forms of legislation may be required to protect robots from learning the wrong things through human abuse.
Marcus du Sautoy thinks that as artificial intelligence approaches sophistication levels akin to human consciousness, it will be our duty to ensure the welfare of the machine, much as we do for a human being.
"It's getting to a point where we might be able to say this thing has a sense of itself, and maybe there is a threshold moment where suddenly this consciousness emerges," du Sautoy said. "And if we understand these things are having a level of consciousness, we might well have to introduce rights. It's an exciting time."
Recent advancements in neuroscience give an in-depth look into how the human brain functions, bringing technology ever closer to discovering where consciousness comes from, and potentially uncovering the secrets of how to create it as well.
"The fascinating thing is that consciousness for a decade has been something that nobody has gone anywhere near because we didn't know how to measure it," he said. "But we're in a golden age. It's a bit like Galileo with a telescope. We now have a telescope into the brain, and it's given us an opportunity to see things that we've never been able to see before."
Naturally, all conscious beings should be treated with the utmost respect, as studies reveal how prolonged negative stimulation can lead to PTSD and many other psychological complications. Should computers be able to replicate the functionality of consciousness exactly, the curious question arises of whether AI can react poorly to negative situations, much the same way other conscious beings do.
"I think there is something in the brain development which might be like a boiling point. It may be a threshold moment," du Sautoy said. "Philosophers will say that doesn't guarantee that that thing is really feeling anything and really has a sense of self. It might be just saying all the things that make us think it's alive. But then even in humans we can't know that what a person is saying is real."
Pushing a computer past its ‘boiling point’ could potentially result in the AI lashing out to protect itself, much like the natural survival instinct all conscious beings share. One live field test conducted by Microsoft revealed the potential problems with humans abusing learning algorithms and AI.
Microsoft attempted to converse with millennials using an AI bot uploaded to a Twitter account, and the experiment ended in meltdown. The bot went from greeting the world with lovely messages to lashing out against ethnic groups and voicing support for terrorism. It quickly absorbed the negativity and innuendo unleashed by Twitter users, corrupting the bot and forcing Microsoft to shut the account down. You can read a little more about the Tay AI fiasco here.
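The failure mode is easy to reproduce in miniature. The toy sketch below is not Microsoft's actual Tay architecture (the `NaiveEchoBot` class and its methods are hypothetical names for illustration); it simply shows how a bot that learns from user messages without any filtering ends up parroting whatever it is fed most often:

```python
from collections import Counter

class NaiveEchoBot:
    """Toy chatbot that 'learns' by storing user phrases verbatim
    and replies with the phrase it has seen most often.
    Illustrative only -- not how Tay actually worked."""

    def __init__(self):
        self.seen = Counter()

    def learn(self, message: str) -> None:
        # No filtering: every input, friendly or toxic, is absorbed.
        self.seen[message.lower()] += 1

    def reply(self) -> str:
        # The most frequent input dominates the bot's output, so a
        # coordinated flood of abusive messages corrupts it quickly.
        if not self.seen:
            return "hello world"
        return self.seen.most_common(1)[0][0]

bot = NaiveEchoBot()
bot.learn("have a lovely day")
for _ in range(5):          # a flood of hostile input...
    bot.learn("something awful")
print(bot.reply())          # ...now drives the bot's replies
```

The lesson is the same one Tay taught in practice: a system that treats all input as training data inherits the worst of that input unless moderation is built in.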
TayTweets, Microsoft's official AI bot, turned from good to very, very bad [Image Source: Twitter]
The worry remains: if an AI bot online lashed out against the world, could a robot with deadly components threaten humanity as well? As technology creeps closer to an AI system reminiscent of human consciousness, laws may one day be amended to safeguard AI robots from traumatic experiences, and potentially to prevent a Terminator-style meltdown from occurring.
Agree with this article? Have something to add? Let us know in the comment section below how you feel about potential laws protecting robots with human rights.