Someone recently wrote to Mike Resnick (who quoted the letter in his “Ask Bwana” column in Speculations) about Asimov's robot stories. The context is irrelevant; the line I'm reacting to is the questioner saying:
[Asimov's] concepts—particularly those of the three laws—are so broadly accepted that any future robotic stories that don't take them into consideration (if only to establish a framework where they don't apply), does so at the peril of being considered naive or worse.
[Added later: That was from the original question, not the answer. In other words, this isn't what Resnick said, it's what the letter-writer said.]
I confess that I'm a little baffled by this notion. It seems to me that Asimov's Three Laws of Robotics are thought-experiments with no bearing on the real world, and thus that stories that attempt to portray robots in any sort of realistic way should ignore them. How would one go about programming a computer, even a sentient one, to be incapable of performing any action that would cause harm to a human? Humans are frequently incapable of figuring out whether their actions and inactions will cause harm to other humans; how's a robot supposed to know? Actions and inactions have unpredictable consequences that go far beyond the immediate and obvious ones. And in fact, Asimov's robot stories were often about the ways in which the Three Laws failed, were subverted, or resulted in unintended consequences; they weren't, by and large, good arguments in favor of the Three Laws. A character in The Caves of Steel apparently says, “A robot must not hurt a human being, unless he can think of a way to prove it is for the human being's ultimate good after all.” (I haven't read Caves of Steel, though, so I can't confirm this.)
Roger Clarke's fascinating 1993-1994 article “Asimov's Laws of Robotics: Implications for Information Technology” notes: “At first sight, Asimov's laws are intuitively appealing, but their application encounters difficulties.” Clarke later comments:
The intuitive attractiveness and simplicity [of the original version of the Three Laws] were progressively lost in complexity, legalisms, and semantic richness. Clearly then, formulating an actual set of laws as a basis for engineering design would result in similar difficulties and require a much more formal approach. Such laws would have to be based in ethics and human morality, not just in mathematics and engineering. Such a political process would probably result in a document couched in fuzzy generalities rather than constituting an operational-level, programmable specification.
Point being, the Three Laws are an interesting gedankenexperiment and an interesting basis for a certain kind of story, but they don't make much sense for use in a realistic story about realistic robots. “Do no harm” is great as a guiding principle—I'm quite fond of it myself—but impossible to follow as an unbending rule.
(In my opinion, the inaction clause is particularly insidious. Each robot would have to monitor the food intake of all nearby humans and make sure they didn't eat anything bad for them. Each robot would have to stop all nearby internal-combustion vehicles, because the emissions harm humans, but then would have to create other means of travel, because preventing an ambulance from traveling may result in harm to someone. And so on.)
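To make the inaction problem concrete, here's a toy sketch (all names and numbers are hypothetical, invented purely for illustration) of what a hard-rule reading of the First Law would demand: the robot must check every candidate action, including "do nothing," against every nearby human, and any nonzero predicted harm forbids the act. Since real consequences are unpredictable, an honest harm estimator can almost never return exactly zero, so nothing is ever permitted.

```python
def predicted_harm(action, human):
    """Stand-in for an impossible oracle. Real-world consequences are
    unpredictable, so an honest estimator rarely returns exactly 0.0."""
    estimates = {
        ("serve_dessert", "diabetic"): 0.3,   # dietary harm
        ("do_nothing", "pedestrian"): 0.1,    # nearby exhaust fumes go unstopped
        ("stop_traffic", "patient"): 0.9,     # a blocked ambulance
    }
    # Even unlisted (action, human) pairs carry residual uncertainty.
    return estimates.get((action, human), 0.05)

def first_law_permits(action, humans):
    """Hard-rule reading: any predicted harm, through action OR
    inaction, forbids the act for every nearby human."""
    return all(predicted_harm(action, h) == 0.0 for h in humans)

humans = ["diabetic", "pedestrian", "patient"]
actions = ["serve_dessert", "do_nothing", "stop_traffic"]
permitted = [a for a in actions if first_law_permits(a, humans)]
# Under honest uncertainty, even "do_nothing" is forbidden:
# permitted comes back empty.
```

The point of the sketch is that "harm through inaction" turns the First Law from a filter on the robot's own acts into an unbounded obligation over everything happening around it, which is exactly the monitoring spiral described above.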
So if I see robots in fiction that purport to obey the Three Laws, I'm going to be very very dubious, unless they're being used as Asimov used them, to perform gedankenexperiments or explore loopholes.
I wrote most of the above a couple weeks ago; I was reminded to finish and post it by seeing a preview for the upcoming I, Robot movie, starring Will Smith and directed by Alex Proyas (who directed The Crow and Dark City). I'm a little dubious about the writing—I'm guessing the script has been heavily doctored—but the preview, a fake ad for a new robot model (see the I Robot Now site), was kind of fun and stylish. We'll see.