He's rather inscrutable; what's his intent?
From the US Office of Naval Research:
A very thorough treatise on the morality of creating battlefield robots. It covers the evolution of the philosophy of what a robot should and shouldn't be able to do, and discusses the realities of modern robotics versus literary experiments such as Asimov's Three Laws. An attempt by Roger Clarke to fix the laws is presented; it makes for a fun read and provides food for thought if you like double-checking the logic of such things.
- The Meta‐Law: A robot may not act unless its actions are subject to the Laws of Robotics.
- Law Zero: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
- Law One: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate a higher‐order Law.
- Law Two: A robot must obey orders given it by human beings, except where such orders would conflict with a higher‐order Law; a robot must obey orders given it by superordinate robots, except where such orders would conflict with a higher‐order Law.
- Law Three: A robot must protect the existence of a superordinate robot as long as such protection does not conflict with a higher‐order Law; a robot must protect its own existence as long as such protection does not conflict with a higher‐order Law.
- Law Four: A robot must perform the duties for which it has been programmed, except where that would conflict with a higher‐order law.
- The Procreation Law: A robot may not take any part in the design or manufacture of a robot unless the new robot’s actions are subject to the Laws of Robotics.
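The laws form a strict precedence hierarchy: a lower-order law only binds where no higher-order law has already ruled. As a toy illustration of that structure (not anything from the paper, and modeling only the injury/obedience/self-preservation laws), one could sketch the hierarchy as an ordered list of veto predicates, where the action attributes are entirely hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Law:
    """One law in the hierarchy; earlier entries in LAWS outrank later ones."""
    name: str
    forbids: Callable[[Dict[str, bool]], bool]  # True if the law vetoes the action

# Hypothetical action attributes stand in for the (much harder) problem of
# actually determining that an action harms a human, disobeys an order, etc.
LAWS: List[Law] = [
    Law("Zero",  lambda a: a.get("harms_humanity", False)),
    Law("One",   lambda a: a.get("harms_human", False)),
    Law("Two",   lambda a: a.get("disobeys_human_order", False)),
    Law("Three", lambda a: a.get("endangers_self", False)),
]

def first_veto(action: Dict[str, bool]) -> Optional[str]:
    """Return the name of the highest-order law vetoing the action, or None.

    Because the list is scanned in precedence order, a lower-order law is
    consulted only when every higher-order law has declined to rule.
    """
    for law in LAWS:
        if law.forbids(action):
            return law.name
    return None

# An action that harms a human *and* disobeys an order is vetoed by Law One,
# since Law One outranks Law Two.
print(first_veto({"harms_human": True, "disobeys_human_order": True}))
```

Of course, the hard part is nowhere in this sketch: it is the predicates themselves, i.e., deciding reliably whether an action "harms a human being" at all, which is exactly where the paper's skepticism lands.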
These laws have some pretty amazing assumptions built into them:
- A development team will exercise care that the laws are not violated.
- A development team will feel bound by the laws.
- The development team possesses some sort of software validation methodology that lets them state unequivocally that no possible combination of program state transitions and inputs will cause a law to be violated.
- The robot's programming can't be modified by a ne'er-do-well.
- The robot's programming can't change by some other means.
The paper proposes that a slave morality be imposed upon robots, so that the responsibility for their actions lies with those who deployed them, and that robots be denied autonomy. Given the difficulty of creating any complex, large-scale software application, the likelihood of unexpected interactions between software components, and the inherent hackability of technology, that seems a reasonable position. From the paper:
Such a robot would have the potential to leave behind its imposed slave morality and become autonomous in the Kantian sense: the programmer of its own self and own goals, or the maker of its own destiny. Not only would such robots pose incredible risks to humans in the possibility of rampancy, but they would also be undesirable from a military ethics and responsibility perspective: they would then move moral responsibility from the commanding officer to the robot itself. ... So for the foreseeable future, we solve both the problems of risk and responsibility by requiring a slave morality.
... But there is worry that as robots become ‘quasi‐persons’, even under a ‘slave morality’, there will be pressure to eventually make them into full‐fledged Kantian‐autonomous persons, with all the risks that entails... The only guarantee of avoiding this outcome appears to be a prohibition on programming robots with anything other than a ‘slave morality’, i.e., simply not allowing a Kantian‐autonomous robot to ever be programmed or built (though such bans, especially when applied internationally, have been notoriously difficult to enforce).
A number of conclusions and recommendations are presented in the paper, which I encourage you to review. I might summarize it as "oh dear, the genie is out of the bottle, luckily we have a few years to worry about it." In the words of the paper:
[Autonomous robots] raise novel ethical and social questions that we should confront as far in advance as possible—particularly before irrational public fears or accidents arising from military robotics derail research progress and national security interests.