Way back in 1942, science fiction author Isaac Asimov proposed his famous Three Laws of Robotics in a short story entitled “Runaround”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Despite the enduring influence of these tenets, there is a push underway to give robots what has been termed “lethal autonomy”: the ability to kill without direct human involvement. Killing by algorithm is no longer science fiction. It has not only become technologically possible but also increasingly likely to occur, if not here, then overseas. For some, the advantages of automation in human conflict are simply too great a temptation. That is a fundamental shift that could well reshape our geopolitical landscape.
See on blogs.wsj.com