Warning — spoilers for the following works! (Images used are mainly taken from the Internet Speculative Fiction Database.)
“Home Is the Hangman,” story by Roger Zelazny
“Due Process,” story by D.C. Poyer
2001: A Space Odyssey, film by Stanley Kubrick, novel by Arthur C. Clarke
“Counting Casualties,” story by Yoon Ha Lee
“Is My Toddler a Stochastic Parrot?” illustrated essay by Angie Wang
I have a new science fiction story out at Reactor magazine (formerly tor.com). Many thanks to editor Jonathan Strahan for his thoughtful insights and artist Sara Wong for the illustration. (Wong’s image really captures aspects of the text in a way that I appreciate more every time I look at it.)
It’s an offbeat story for me in that the heart of it is dialogue. I tend to write stories with a lot of wild and far-out settings; they would be very expensive to film. This one takes place mostly within one neighborhood. I think the only story I’ve done that’s as dialogue-focused as “Nine Billion Turing Tests” is “Waiting for a Me Like You” (Fantasy & Science Fiction November-December 2012), which takes place in a single office and which played out in my head like a Twilight Zone episode.
The main dialogue in “Nine Billion Turing Tests,” as the title suggests, is a conversation between a software engineer and a prototype therapeutic chatbot. Although the chatbot in my story insists it isn’t self-aware, I was inspired by various stories about artificial intelligences talking with humans.
So here is a list of four fictional stories involving human-machine dialogue, along with one essay I read recently that I thought really captured the moment we’re in, as regards ChatGPT and other “generative AI” conversationalists.
(For what little it’s worth, my background’s in English literature and library science, so I don’t really have a solid resume for talking about AI in any form, though that won’t stop me from telling stories about it.)
Here are some works on human-machine interaction that tickled my brain.
* * *
“Home Is the Hangman,” story by Roger Zelazny. (Analog magazine, November 1975)
This is the third and last of Zelazny’s stories of a nameless investigator living in a near future of centralized tracking and monitoring. Our protagonist is “off the grid” — not because he lives in the woods, but because he was there at the creation of the system and gained the unique privilege of being invisible to it. By design, the global surveillance system simply will not record his presence. Luckily for the world, he uses his powers for good as a private investigator. I have to admit I haven’t read the other stories in that sequence, even though the premise is great.
“Home Is the Hangman” stands wonderfully well on its own, however. Spoilers for the plot beyond this point: the Hangman is a humanoid space probe that was originally operated remotely by a team of humans on Earth. This operation was far more hands-on than the situation with, say, a Mars rover. The operators experienced a kind of virtual reality simulation of being present on the Hangman’s explorations. Because of the time lag in communicating with the Hangman’s distant location, the Hangman also had some independent decision-making ability. In time the Hangman became more and more self-aware.
When the Hangman returns to Earth and one of its operators dies under suspicious circumstances, the other operators suspect the Hangman has become sapient and is enacting revenge, Frankenstein-style, against those who shaped its mind. The nameless protagonist is called in.
The story begins as a mystery but it becomes what I think is one of Zelazny’s more philosophical stories. Without giving too much else away, in the resolution the protagonist engages in a fascinating dialogue with the Hangman.
— Then I take it you feel you are possessed of free will?
— Marvin Minsky once said that when intelligent machines were constructed, they would be just as stubborn and fallible as men on these questions.
— Nor was he incorrect. What I have given you on these matters is only my opinion. I choose to act as if it were the case. Who can say that he knows for certain?
This is a recurring theme in such stories — the question of machine consciousness runs smack into the question of human consciousness. Machines with free will are interesting to contemplate at a time when you can find scientists seriously arguing, like Robert Sapolsky, that there’s no such thing as human free will. (Will we some day have scientists who argue that machines have free will but humans don’t? Now there’s a story idea …)
Another interesting theme in “Hangman” is guilt, and whether the capacity for guilt is necessary for true intelligence. (This is a theme that Greg Bear also explored in his great 1990 novel Queen of Angels.)
Zelazny had a talent for writing dialogue for nonhuman creatures. His 1982 collaboration with Fred Saberhagen, Coils, also has a memorable discussion with an artificial intelligence. His 1966 standalone novella “For a Breath I Tarry” is a wry and moving comedy about intelligent machines bickering over the ruins of human civilization. His 1982 novel Eye of Cat has some fascinating back-and-forths with an extraterrestrial intelligence. And his other works, especially his Amber series, are peppered with talkative nonhumans of various kinds. I always find Zelazny satisfying to return to, in no small part because of the thoughtful conversations that take place throughout his stories.
“Due Process,” story by D.C. Poyer. (Galileo magazine, May 1979)
Ah, Galileo magazine! I had a subscription in middle school. There were a lot of great stories in there (I think they may have been the first to publish Connie Willis, for example). D.C. Poyer contributed several tales and I always liked his work. Especially memorable was “Due Process,” about a future Supreme Court trying to decide if an artificial intelligence can have civil rights.
In the story, the device called Eric has been taken by a scientist from the lab of its construction. Is this theft or freeing a person from captivity? The case hinges on what the justices make of Eric. We, the audience, see some of the events through Eric’s perspective, so we’re in no doubt that Eric thinks. We also get into the head of the chief justice (if I remember correctly) who is sincerely torn on the issue and wondering what the controversy will do to the country.
Eric is a soft-spoken being, a gentle soul even, and his sincerity is so strong he even convinces an assassin not to deactivate him, telling her he forgives her and that he loved being alive. She curses him for, as she sees it, coldly manipulating her emotions. But she relents. Eric is likewise humble before the justices. He says that his main developer said that if human mental processes were like upwellings from the ocean, Eric’s were more like upwellings from a duck pond. But like the Hangman in Zelazny’s story Eric nonetheless asserts that he is an intelligent being.
I found the legal “out” the court finds for Eric’s status to be clever and surprising. It makes me wonder if a similar case will one day be brought before a real court.
2001: A Space Odyssey, film by Stanley Kubrick, novel by Arthur C. Clarke. (both 1968)
Well, of course this had to be here. Even more than the iconic monolith, the first thing everybody remembers about this story, movie or book version, is the bland voice of HAL 9000 politely declining to let astronaut David Bowman back aboard the spaceship Discovery: “I’m sorry, Dave. I’m afraid I can’t do that.” HAL is chilling precisely because he means what he says. He really is sorry. He really thinks it’s unfortunate he had to kill the other astronauts. And of course he really does have the highest enthusiasm for the mission. That’s why he had to start killing people, after all.
Readers should correct me if I’m wrong, but I believe the famous film and the nearly as famous book were parallel developments, and that Kubrick and Clarke considered the book one of multiple possible interpretations of the film. The film, meanwhile, does not explain itself. I’ll necessarily have to lean on the book’s interpretations here, but I will be thinking of the film too.
Clarke tells us that HAL 9000’s issue is a problem of competing directives. As a machine he can’t bend the rules he’s been given in the same way that we duplicitous humans can. Having been told that the mission to reach the monolith in the outer solar system (at Jupiter in the movie, Saturn in the book) is of utmost importance, HAL must treat as secondary the imperative to protect the lives of the astronauts aboard the Discovery. And because his orders include hiding the truth about the mission from the astronauts — that alien life exists and Discovery is investigating it — HAL is drawn to the conclusion that the astronauts are a threat to the mission. Thus when the crew of Discovery starts questioning HAL, they’re done for. Dave Bowman proves clever and resilient enough to survive, deactivate HAL, and continue the mission on his own.
Curiously, in Clarke’s version of 2001 there is not one artificial intelligence in the story but five: HAL 9000 and four alien monoliths. (The monoliths also appear in the film but are not so clearly explained.) Clarke’s exposition tells us that the monoliths have been left behind by an alien species that has transcended biology to become some manner of remote, ethereal life form. The monoliths continue the aliens’ work of promoting intelligence throughout the galaxy. They stimulated the minds of humanity’s remote ancestors with one monolith, left a second buried on the Moon so that sunlight hitting it would trigger a message to the outer solar system, where a third stands ready to transport a human to a distant world — a world where a fourth is ready to transform that human into a higher form of life. (In the film we can certainly draw this conclusion but it’s not stated; it’s possible something else is going on.)
The thematic link between HAL and the monoliths seems implied in the film by the dark rectangular panels upon which HAL’s camera eyes appear. But what are we to make of this connection? Especially when the monoliths have absolutely nothing to say, but HAL is the most talkative character in the story? That both computer and monoliths are intended for a purpose seems clear. HAL eases the way for humans to explore space, and the monoliths manipulate other life forms on behalf of their creators. Perhaps, if humanity were to advance as far as the monolith-makers once did, HAL’s successor machines would have capabilities like the monoliths. We aren’t privy to any communications between the monoliths and their makers (maybe there aren’t any such conversations) but HAL’s talks with humans seem very detached from human passions. He does not assert his own consciousness, either fiercely like the Hangman or humbly like Eric. In talking with his makers he comes across as blandly cooperative, even when he’s busy killing them.
If we see HAL as thematically connected to the monoliths, maybe that implies an explanation for why the monoliths’ creators are nowhere to be seen. In the film version we don’t have the book’s narration to confirm that the monolith makers have ascended to some higher plane of existence. Maybe in the universe of the film what happened to them was more akin to what happened to the crew of Discovery. It may be that the monoliths had the highest enthusiasm for the mission.
“Counting Casualties,” story by Yoon Ha Lee.
This is a newer story, so I will be a bit more careful about spoilers. In a distant future, or parallel universe, or galaxy far, far away, a fleet of starships is fighting the “deaders,” an enemy fleet that lays waste to planets and somehow erases those worlds’ art forms from existence, so that no memory of that particular manner of poetry, or calligraphy, or illustration, remains to civilization. The fighting ships are, like 2001’s Discovery, controlled by AIs, who lend their names to the ships themselves, such as counting casualties (the lowercase is deliberate). These machines are called faces, and are accepted as important members of the fleet’s war council:
“The faces projected themselves uniformly as black jackal masks with hellspark eyes, each considerately labeled with its name. Faces had a certain respect for tradition. As counting casualties liked to say, humans were so short-lived and changeable that it was nice to have some things to rely on, like basic protocol and the perennial popularity of coffee … Once I asked the highship’s face why it needed humans at all. After all, it had access to a variety of robots to perform maintenance chores, and it could trivially split its attention. It said only that it liked having someone to remember.”
In this setting humanity’s artificial intelligences are far more advanced than those in the previously mentioned stories. Indeed, the humans seem to be aboard ship almost for aesthetic reasons rather than practical ones. The tools no longer need the makers’ hands. Interestingly, the ships themselves are not taken by the deaders’ erasure weapons — the answer to why is tied up with the ending, so I won’t go into that. But it’s an intriguing, albeit grim, ending.
(An interesting comparison is Iain M. Banks’ Culture universe, in which the starships of the eponymous Culture are also guided by artificial intelligences whose names are equivalent to the names of the ships. The tone is quite different, however, in that Culture ships, unlike the ships of the Coalition in “Counting Casualties,” are rarely warships. They tend to be snarkily playful in their attitude, rather than committed to a command structure.)
As in 2001 the voyagers are confronted with an overwhelming enigma, but at least in this case their sentient machines are reliable allies. Yoon Ha Lee is one of my favorite prose stylists and he is in top form here.
“Is My Toddler a Stochastic Parrot?” illustrated essay by Angie Wang. (The New Yorker, November 15, 2023. Link in title, may be paywalled)
Wang, who wrote and illustrated this wonderful essay, compares her experience with ChatGPT to her experience of helping her toddler learn words. Both could be said to be learning language; are they in some sense similar? Wang writes:
“Our outputs — machine and human — can seem so similar, almost indistinguishable, that Sam Altman, the C.E.O. of OpenAI, tweeted shortly after the release of ChatGPT,
i am a stochastic parrot,
and so r u
But, despite superficial similarities in output, we are not the same.” (Italics mine to distinguish Altman’s words from Wang’s.)
In both illumination of and counterpoint to Wang’s thoughts on large language models, we see a graphic-novel-style journey of mother and toddler exploring the world. (Which, as a dad of teenagers, takes me back!) Importantly (I think) the cartoon-like humans are surrounded by detailed illustrations of the natural world, as though that world is gently calling to us to stay grounded. And indeed, at a crisis point in the essay/tale, as the narration says “I had the dizzying sense of the ground giving way under our feet,” mom and toddler tumble into an abyss filled with white-on-black accounts of machine learning seeming to displace humans (such as “Losing to AlphaGo caused Lee Sedol to quit Go” and “Diffusion-model users brag about how artists spend days drawing something they can imitate in seconds”). Resolution begins with the words “What do other people matter to us?” and the characters returning to Earth and staring out at an infinite seascape.
The essay argues for a creative re-focusing on the real — such as the experience of motherhood, of art, of connection with real people, nature, the messy reality of human life. A toddler can’t learn language the way a large language model can but goes the slow human route, bringing along a vast world of human interiority in a way no (existing) machine can.
I really like the conclusion and I think the message of human value holds even if we someday have true thinking machines. I won’t speak for HAL 9000 but I think if they were real the Hangman, Eric, and counting casualties would appreciate this essay.