Applying Human Ethics to Robots
How can we ensure that robots never harm humanity?
Robots are increasingly common in everyday life. From robots that help fight fires to robots that assist the elderly, robots are here to stay and, more importantly, here to help humanity.
But how do we ensure that robots only help humanity? What ethics should robots abide by? And what do we do about potentially lethal robots, those designed for use in war?
Robot ethics is an interdisciplinary research effort that aims to understand the ethical implications of robotic technology and answer these questions. Researchers from many fields are joining forces to ponder these crucial questions and seek answers.
One way to tackle these issues is with the Three Laws of Robotics, formulated by science-fiction writer Isaac Asimov and first stated in full in his 1942 short story "Runaround". The laws are as follows:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
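Notice that the laws form a strict priority ordering: each law yields to the ones above it. A minimal sketch can make that ordering concrete. Everything here is hypothetical and purely illustrative (the `Action` model and its fields are invented for this example); real robotic systems cannot reduce "harm" to a boolean flag, which is precisely why robot ethics remains an open research problem.

```python
from dataclasses import dataclass

# Hypothetical model of a proposed robot action (illustrative only).
@dataclass
class Action:
    harms_human: bool = False       # would directly injure a human
    neglects_human: bool = False    # inaction that lets a human come to harm
    ordered_by_human: bool = False  # a human has ordered this action
    endangers_self: bool = False    # risks the robot's own existence

def permitted(action: Action) -> bool:
    """Check an action against the Three Laws, in priority order."""
    # First Law: no harm to humans, whether by action or inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders (the First Law check has already passed).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_self

# An order from a human overrides the robot's self-preservation:
print(permitted(Action(ordered_by_human=True, endangers_self=True)))  # True
# But no order can justify harming a human:
print(permitted(Action(harms_human=True, ordered_by_human=True)))     # False
```

The hard part, of course, is everything this sketch assumes away: deciding whether an action "harms" a human requires prediction under uncertainty, not a pre-labeled flag.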
Are these laws enough to handle the growing robotization of our societies? Or does more work need to be done to truly tackle robot ethics?