Three laws for the robot-kings

Someone recently wrote to Mike Resnick (and Resnick quoted the letter in his “Ask Bwana” column in Speculations) about Asimov's robot stories. The context is irrelevant; the line I'm reacting to is this bit from the questioner:

[Asimov's] concepts—particularly those of the three laws—are so broadly accepted that any future robotic stories that don't take them into consideration (if only to establish a framework where they don't apply), does so at the peril of being considered naive or worse.

[Added later: That was from the original question, not the answer. In other words, this isn't what Resnick said; it's what the letter-writer said.]

I confess that I'm a little baffled by this notion. It seems to me that Asimov's Three Laws of Robotics are thought-experiments with no bearing on the real world, and thus that stories that attempt to portray robots in any sort of realistic way should ignore them. How would one go about programming a computer, even a sentient one, to be incapable of performing any action that would cause harm to a human? Humans are frequently incapable of figuring out whether their actions and inactions will cause harm to other humans; how's a robot supposed to know? Actions and inactions have unpredictable consequences that go far beyond the immediate and obvious ones.

And in fact, Asimov's robot stories were often about the ways in which the Three Laws failed, were subverted, or resulted in unintended consequences; they weren't, by and large, good arguments in favor of the Three Laws. A character in The Caves of Steel apparently says, “A robot must not hurt a human being, unless he can think of a way to prove it is for the human being's ultimate good after all.” (I haven't read Caves of Steel, though, so I can't confirm this.)

Roger Clarke's fascinating 1993-1994 article “Asimov's Laws of Robotics: Implications for Information Technology” notes: “At first sight, Asimov's laws are intuitively appealing, but their application encounters difficulties.” Clarke later comments:

The intuitive attractiveness and simplicity [of the original version of the Three Laws] were progressively lost in complexity, legalisms, and semantic richness. Clearly then, formulating an actual set of laws as a basis for engineering design would result in similar difficulties and require a much more formal approach. Such laws would have to be based in ethics and human morality, not just in mathematics and engineering. Such a political process would probably result in a document couched in fuzzy generalities rather than constituting an operational-level, programmable specification.

Point being, the Three Laws are an interesting gedankenexperiment and an interesting basis for a certain kind of story, but they don't make much sense for use in a realistic story about realistic robots. “Do no harm” is great as a guiding principle—I'm quite fond of it myself—but impossible to follow as an unbending rule.

(In my opinion, the inaction clause is particularly insidious. Each robot would have to monitor the food intake of all nearby humans and make sure they didn't eat anything bad for them. Each robot would have to stop all nearby internal-combustion vehicles, because the emissions harm humans, but then would have to create other means of travel, because preventing an ambulance from traveling may result in harm to someone. And so on.)
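(To make the objection concrete, here's a minimal sketch, in Python, of what a literal First Law filter would have to look like. Everything in it is my own hypothetical illustration—the names Action, predict_harm, and first_law_permits aren't from Asimov or from any real robotics system. The point is that all of the difficulty hides inside predict_harm, a future-predicting oracle that nobody knows how to write.)

```python
# A deliberately naive sketch of a literal "First Law" action filter.
# All of these names are hypothetical illustrations, not from Asimov's
# stories or any real robotics API.

from dataclasses import dataclass


@dataclass
class Action:
    description: str


def predict_harm(action: Action, horizon_seconds: float) -> float:
    """Return the probability that `action` leads to harm to any human
    within `horizon_seconds`.

    This is where the entire difficulty lives: a faithful implementation
    would have to model traffic, diet, medicine, economics -- in short,
    predict the future.
    """
    raise NotImplementedError("nobody knows how to write this")


def first_law_permits(action: Action) -> bool:
    # "A robot may not injure a human being or, through inaction,
    # allow a human being to come to harm."
    # Doing nothing is itself an action and must pass the same test,
    # which is what makes the inaction clause so insidious.
    inaction = Action("stand by and do nothing")
    return (predict_harm(action, horizon_seconds=1e9) == 0.0
            and predict_harm(inaction, horizon_seconds=1e9) == 0.0)
```

(Even this toy version shows the bind: over a long enough horizon, almost no action or inaction has provably zero chance of causing harm, so an unbending rule either paralyzes the robot or forces exactly the fuzzy, judgment-laden compromises Clarke describes.)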

So if I see robots in fiction that purport to obey the Three Laws, I'm going to be very, very dubious, unless they're being used as Asimov used them, to perform gedankenexperiments or explore loopholes.

I wrote most of the above a couple weeks ago; I was reminded to finish and post it by seeing a preview for the upcoming I, Robot movie, starring Will Smith and directed by Alex Proyas (who also directed The Crow and Dark City). I'm a little dubious about the script (I'm guessing it's been heavily doctored), but the preview, a fake ad for a new robot model (see the I Robot Now site), was kind of fun and stylish. We'll see.

5 Responses to “Three laws for the robot-kings”

  1. Arthur D. Hlavaty

John Sladek wrote a hilarious story, “Broot Force,” about some of the things that could go wrong with the Three Laws.

  2. David Moles

Seems like Resnick’s comments really only apply to stories about robots as imagined in the 50s, anyway. From a contemporary SF writer’s perspective, we’ve seen so many variations on AI and cyborgs and uploads and androids and factory automation and whatnot that I’m not sure the word robot means much any more, except as a cultural construct.

  3. David Moles

P.S. Anyway, Jed, you really ought to add a preview feature to this thing, anyway. 🙂

  4. Jed

    Thanks, Arthur! That was the one by “Iclick As-I-Move,” wasn’t it? I love that set of Sladek parodies.

    David: Just to be clear, I should note that the comment I quoted wasn’t from Resnick; it was from someone asking Resnick a question. Agreed that the question seems to be coming from a somewhat old-fashioned view of robots in sf—I don’t think I’ve seen a story that features robots per se in a long time, much less one that uses the Three Laws or anything like them. I think that’s part of why I was so startled by the comment; it’s not the only time I’ve recently seen someone talk about the Three Laws as if they’re universally assumed in sf stories, and I’m not sure where that idea comes from.

    Re preview feature: you mean previewing comments before posting ’em? Yeah, I should do that. Or, better, provide the edit-for-up-to-an-hour feature that JournalScape has; I’m pretty sure I know how to code it, just have to sit down and do it. Some day.

  5. David Moles

    On a side note, the mostly-bad A.I. did, I think, get one thing right: the only real killer apps for humanoid robots are sex and parenting.

