With the release of *I, Robot*, everyone is talking about the terrible damage being done to the ideas of Isaac Asimov. Over at The Fulcrum, for example, Charles2 lists the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

He then wonders whether these laws could be applied to politics and government.
What I wonder is, what kind of lunatic ever thought these laws were workable? Especially the first one. A robot cannot, through inaction, allow a human being to come to harm? Human beings are coming to harm all the time, all over the world, and that’s just counting straightforward physical harm, never mind the more subtle varieties. Every robot with these laws programmed into it would instantly embark on a frenzied quest to change the very nature of reality in order to stop all of this harm from happening. I just want something that will vacuum my floors efficiently, not save the world.
The whole point about robots (or computers more generally) is that they’re very literal-minded. They don’t know the meaning of “within reason.” When talking to each other rather than to machines, human beings are never perfectly precise about what they mean, often for good reason. That’s why we’ll always have literary critics, theologians, and the Supreme Court: to help us understand what was really being said.
I met Asimov once, when he visited my undergraduate university. They thought it would be fun to show him around the astronomy department, much to his bemusement (he was trained as a chemist). He used his advanced age as an excuse for shamelessly flirting with every attractive woman within leering distance. I wonder what he was like before his age was so advanced.