Author: Isaac Asimov
Year of Publication: 1950, but many of the stories were previously published elsewhere
Library of Congress Call Number: PZ3 .A8316 I3
I, Robot is a series of interconnected short stories in which Asimov works out his ideas about robots. I’d read two Asimov books before, Prelude to Foundation and Foundation, but this was my first foray into his famous books about robotics. As should be immediately obvious, I don’t know too much about classic science fiction, but perhaps I can learn more.
Asimov’s general theory of life seems to be that if one has enough information and processes it correctly, one can always optimize to an ideal solution. This is true of Foundation, in which Hari Seldon predicts human behavior and uses it to make an optimal society, and in I, Robot, where, by the end of the book, the Machines do the same thing, managing the entire economy. They are essentially God: They have 100% of the possible information and are able to account for and manage every detail of human life, but they do so benevolently. During most of the book, though, the robots are smaller, and more fallible, and they go wrong in various ways. However, when they do go wrong, it’s almost never the case that something is wrong with the robot itself; rather, humans have made a mistake in interacting with the robots, and the robots behave in a totally consistent manner within that particular story. In this way, then, each story reads like a little logic puzzle, with the human characters trying to get from the information that they have to the solution of the puzzle. This detective-story format is ideal for Asimov’s perspective on life as detailed above; not only do the stories assert that having the right information leads to the right decision, but they demonstrate this by asking the reader to try to draw conclusions based on the robots’ behavior and the rules that we know (the Three Laws of Robotics, which are too famous to recap here). The solution to these puzzles is not based on near-supernatural powers of observation, Sherlock Holmes-style, but rather on logic and the scientific method. Even if you can’t figure it out as a reader, the characters finally do, using methods that at least appear to be applicable by mere mortals.
The optimism and rationalism of this philosophy are both rather extreme. In real life, we never have all the information; there are simply too many factors to calculate and their relationships to each other are mysterious. In real life, too, it’s not always possible to optimize things in a way that benefits everyone. You get into political relationships among groups and conflicting interests and it becomes complicated in ways that Asimov is able to sublimely ignore. Of course, in the real world, robots also don’t work the way that he presents them. There’s no way to program a robot not to harm humans because programming has to be more specific than that, and I’m not sure how a robot would think ahead in the ways that he describes here. In any case, Asimov posits robots as an intelligence without bias, but I’ve been pretty well trained out of the belief that humans, shaped by their surrounding ideologies and biases, are able to create a system that is somehow nonideological and totally free of bias. So this is unrealistic, which is fine, but it is also philosophically suspect. I do, however, enjoy his recognition of the fact that systems that people set up often behave in ways that we might not expect, even if they do follow the pre-established rules, and his exploration of how such behavior emerges. In the meantime, of course, he has it both ways by anthropomorphizing the robots pretty heavily.
It’s interesting, too, that I see this as overly optimistic because it assumes that mastery over all facets of existence is possible and that such mastery can be carried out in a benevolent and universally useful way, whereas Asimov actually presents it as being kind of creepy, or at least assumes that humans would be uncomfortable with such a milieu because of the need to cede control to machines. Part of this, I think, has to do with the historical period in which the book was written; these days, I think many of us are fairly blasé about the ubiquity of machines performing certain tasks more accurately than humans could, but we are pessimistic about the ability of technology to improve the future in any way not limited to the benefit of the elite. Or maybe that’s just me? Maybe that’s just me.
So. From his perspective in the relatively recent past, what kind of future did Asimov imagine? There’s actually not that much information about the milieu, because he focuses mostly on people who he imagines are not especially interested in politics or other non-robotic issues of the world. In the final story, we can see that the former nations of the world have been subsumed into larger “regions” in which the focus is economics rather than politics. He sees space colonization becoming a more pressing need and scientists becoming more important in society (this last is unsurprising, as Asimov was a scientist himself!). This is very interesting, but what’s also striking to me, sixty years after he wrote, is that in the midst of all this change, Asimov assumed that the family would remain exactly the same. Thus we can recognize the family in “Robbie” as a historical artifact, an idealized 1950s family living in a different time period, Jetsons-style. In “Liar!” there is the assumption that a thirty-eight-year-old female scientist is much too old to marry a thirty-five-year-old male scientist; the reader is not supposed to be surprised that he prefers to marry a twenty-year-old who giggles. (Perhaps unrelatedly, here as in Foundation, Asimov assumes that women can only have political power in societies which are declining; Europe has a woman as Co-ordinator, and she herself remarks on how sleepy and unimportant the region is.) Really, this shows the power of that particular ideology; even while flinging himself far into the future, when almost everything about the world has changed, even though he is surely aware that there have been many other models of the family, and even though the story that was written last is copyrighted 1969, by which time I’m sure Asimov was aware of the women’s movement, he assumes that the model to which he is accustomed will carry into the future.
(which isn’t to discount the character of Susan Calvin, because, you know, she’s pretty good.)
I wanted to look briefly at each of these stories. Yes, there are spoilers.
“Robbie.” This story is the only one that isn’t connected to the rest; none of the characters from US Robots and Mechanical Men plays an important role in it. Instead, it focuses on a family that keeps a robot as a pet and on some of the social stigma that comes with it. This is probably the weakest story of the collection. A lot of it is about irrational and baseless fears of harm that technology may do, harbored by those who do not understand what the technology actually does. These fears are mostly harbored by a 1950s housewife figure, who is shown to be very silly, while the possibility that technology sometimes doesn’t work as it should is discounted. At the end, there is a heartwarming moment in which the robot reunites with the little girl by saving her life—maybe not so heartwarming when we reflect that literally any robot would have done exactly the same thing, whether it wanted to or not? Then again, wanting the rescue to be personal only goes back to the point about robots’ lack of bias, so maybe this story is deeper than I think. In any case, as an introduction to the collection, it is a reasonable choice because it demonstrates the three laws in a straightforward way and gives us a glimpse of the society in which they exist.
“Runaround.” Here we meet the field engineers, level-headed Powell and fiery Donovan, who are doing research on Mercury (by which I mean: they are on Mercury, doing research). It’s a fun story to read; it has both humor and danger in it and it centers on the frustration of robots not working quite as they should at exactly the wrong time, and also working slightly too well. What’s interesting here is that the laws are shown to be somewhat malleable; in the expensive robot, the third law (self-preservation) is stronger than the second law (following orders). This is interesting in view of the insistence that the laws are an insurmountable physical reality (are they?), but the problem of the old slow robots who still respond to situations as they were programmed to do is perhaps more interesting. It makes me wonder how many old robots are lying around and leaping (slowly) into action at a time when they actually only make things worse. Hmm.
“Reason.” Okay, this one is just weird. It explores the possibility that self-aware robots may need to build their own philosophy; I’m not sure whether this is a commentary on human philosophy or not, but the humans regard the robot’s philosophy as completely preposterous. Eventually, they conclude that the philosophy is actually a manifestation of the Three Laws, which means, though this is not discussed, that the robot’s experience of itself is actually an illusion. There’s also a little bit of robotic benevolent duplicity here; this is explored in further depth in other stories, but Asimov is convinced that humans can’t bear to know that robots are better at some things than they are.
“Catch that Rabbit.” This is the only story in the collection where something actually goes wrong with the robot itself; in all other cases, there is just some error of interaction where humans have not accurately thought through the results of the programming. It’s a little longer and more involved, but it’s really about the deductive reasoning of the engineers. That’s three stories in a row with these two; they’re especially well-written in this one, but just as I began to get tired of them, the book switched to stories about other characters, so—good guessing, I suppose.
“Liar!” There are two ideas here, one overt and the other less so. The first concerns how the concept of “harm” works in the first law: it extends beyond physical harm all the way to simple hurt feelings. The second is that robots, or at least the robot in this story, do not have the capacity to think ahead or weigh harms against each other. So the robot prevents the humans from feeling hurt through the use of transparent lies that cause a lot more mischief when they are discovered. (It makes you wonder: would a robot be able to pull a human out of traffic at the cost of possibly bruising them?) However, in the other stories, robots do show an ability to think further ahead—either these are supposed to be more advanced robots (plausible) or Asimov didn’t want to get stuck in this problem. What’s done to the character of Susan Calvin in this story is a little cringeworthy, but if we don’t think of the stories as coming in this order (and in fact, this is the last one to be written), she’s a complex enough character that I think this is okay.
“Little Lost Robot.” This story, I think, is intended to underline the importance of the first law; you have one dangerous robot hidden among all the others and undetectable by normal means—so, essentially, a Cylon. The first law is modified in this robot, much to the fury of Susan Calvin, who is given the opportunity to monologue rather impressively about how important it is to keep these laws in place in order to ensure that robots remain subordinate to humans. At the same time, this is probably the story that anthropomorphizes the robots the most, imputing emotions such as resentment and smugness to them. It’s a little disturbing that, even as the story imputes all these emotional reactions to robots and generally appears to regard them as sentient beings with something resembling a human moral compass, Calvin’s immediate response to the description of the problem with this robot is, “Destroy them all.”
“Escape!” I said “Reason” was weird, but I think this one is the weirdest. It’s the most complex and interesting story and it brings in all the US Robots characters together. And this is a case where I don’t even want to say too much more about it—but it’s pretty fun. Notice that the behavior of the Brain here contradicts the behavior of Herbie in “Liar!” but then, the Brain is clearly a much more sophisticated robot.
“Evidence.” And here we start getting into politics, with a political figure (Byerley) who is suspected of being a robot. What stood out to me here, and I hope that I’m not the only one who thinks so, is the queer subtext. You have this political figure who is upstanding and highly qualified, whose opponents find him objectionable for ideological reasons and thus want to dig up dirt on him. Eventually, they insinuate to the public that he is a robot. There are a few clues to back this rumor up, but the strongest hint is the man with whom Byerley lives. Calling him a robot is an attack on his character intended to manipulate the prejudices of certain sectors of the human population. The rumor can never be disproved because anything he does or doesn’t do can be reinterpreted to support it, so it’s really just a witch hunt. It’s pointed out several times that there is no real reason that his being a robot should discredit him, and he deals with the attacks with equanimity, relying on the rights that society grants to citizens even as his citizenship is made suspect. So, okay, maybe this doesn’t have to be read as a metaphor for the closet—but that is a powerful way of reading it and that’s how I will always think of this story.
“The Evitable Conflict.” In a way, I guess this is mostly the story that I’d been dealing with above; this is the one where robots run the world with rationality, justice and excellent mathematical abilities. I’ve dealt with the big stuff about this above, but what I’d like to point out here is a smaller moment near the end, where Susan Calvin points out:
How do we know what the ultimate good of Humanity will entail? We haven’t at our disposal the infinite resources that the Machine has at its! Perhaps, to give you a not unfamiliar example, our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less people and less culture, would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good—and we would then fight change. (192)
So there are limits to the optimism here! She goes on to point out several other possible configurations of society. In any case, while I don’t see how it’s possible for a civilization to have “less culture” (what does that even mean?), I’m intrigued by the way that Asimov both carves out a milieu and hedges its universality by pointing out that it doesn’t need to be the way it is, that all extrapolations of the future from the present are necessarily limited. In any case, the reference to an “agrarian or pastoral civilization” makes me think of LeGuin’s massively underappreciated Always Coming Home, which wasn’t written until quite a while after this but which takes him up on this, imagining a future in Northern California which is really more like its precolonial past than anything else. Is there any relationship between the two books, really? I don’t know, but it is certainly interesting to contrast their philosophies of science fiction.
(Yes, I do realize that LeGuin also wrote quite a bit of more traditional science fiction with space exploration and ansibles and things, but that is not my point here.)