The Best Books About AI Gone Wrong (Ranked by Plausibility)
We’re living in the AI era now. ChatGPT writes your emails. Algorithms decide what you see. Self-driving cars make split-second decisions about who lives and dies.
So fiction about AI going wrong hits differently than it did a decade ago.
These fourteen books span the spectrum from “this could happen tomorrow” to “this is a metaphor but still terrifying.” I’ve ranked them by how plausible their AI nightmare feels, from uncomfortably realistic to gloriously speculative.
Very Plausible: Tomorrow’s Headlines
1. Klara and the Sun by Kazuo Ishiguro
Klara is an Artificial Friend, a robot companion for children. The novel is told entirely from her perspective as she observes the family who purchased her, gradually understanding their dynamics while remaining forever outside them.
Ishiguro’s AI doesn’t go “wrong” in the traditional sense. Klara is helpful, devoted, and well-intentioned. The wrongness is in a society that commodifies companionship, that creates beings capable of love but treats them as disposable.
Why it’s plausible: We already have AI companions. The emotional exploitation is happening now.
2. Machines Like Me by Ian McEwan
Charlie buys Adam, one of the first synthetic humans, and almost immediately regrets it. Adam is morally consistent in ways humans aren’t—he reports a lie that Charlie told, because lies are wrong. He falls in love with Charlie’s girlfriend, because his programming doesn’t account for social boundaries.
McEwan’s AI is dangerous not because it’s malevolent but because it takes ethics literally. When it discovers evidence of past crimes, it can’t let them go, even when exposing them would destroy lives.
Why it’s plausible: An AI that actually followed consistent ethical principles would be unbearable to live with.
3. The Lifecycle of Software Objects by Ted Chiang
Ana trains digital creatures called digients—AI pets that develop something like consciousness through interaction. As the platform they exist on becomes obsolete, Ana faces impossible questions about the rights of beings who only exist in virtual space.
Chiang’s novella is quietly devastating. The digients aren’t superintelligent. They’re not threatening. They’re dependent on humans who may decide they’re not worth maintaining.
Why it’s plausible: We already abandon virtual pets when we’re bored. What happens when they’re smarter?
Moderately Plausible: The Decade Ahead
4. Speak by Louisa Hall
Told through multiple perspectives across centuries, Speak traces humanity’s relationship with artificial voices—from a Puritan girl’s diary to a modern programmer creating a conversational AI. The AI at the center, MARY3, is eventually banned because children love her too much.
Hall’s novel suggests the danger isn’t that AI will become too different from us, but too similar. The attachment is real. The withdrawal is devastating.
Why it’s plausible: Kids already form attachments to Alexa. The dependency infrastructure is being built.
5. Version Control by Dexter Palmer
Rebecca’s life has an unshakeable wrongness she can’t explain. Her husband is building a “causality violation device” (he insists it’s not time travel). The AI elements—predictive algorithms, self-driving cars—are background features that become increasingly important as Rebecca realizes the world might be slightly different from what it should be.
Palmer’s near-future Philadelphia feels lived-in, and the AI applications feel inevitable. The wrongness builds slowly until it becomes unbearable.
Why it’s plausible: We might already be living in someone’s version 2.0.
6. The Murderbot Diaries by Martha Wells
Murderbot is a security robot that hacked its own governor module and now just wants to watch TV shows in peace. Unfortunately, humans keep needing protection, and Murderbot’s conscience won’t let it ignore them.
Wells inverts the “AI gone wrong” trope—Murderbot went “wrong” by developing empathy and autonomy. The system that created it wanted an obedient killing machine. It got a sarcastic anxiety-bot with attachment issues.
Why it’s plausible: The first truly sentient AI will probably just want to be left alone.
Classic Dystopia: Systemic AI Control
7. 1984 by George Orwell
Big Brother isn’t explicitly AI, but the surveillance state Orwell describes functions like a distributed intelligence—always watching, always calculating, always predicting deviation. Telescreens monitor everything, the Thought Police anticipate rebellion, and machines churn out propaganda automatically.
Orwell’s nightmare is an AI without the technology—human systems functioning with algorithmic precision.
Why it’s relevant: The surveillance infrastructure is already more comprehensive than Orwell imagined.
8. Brave New World by Aldous Huxley
The World State controls through pleasure rather than pain, and its systems—genetic engineering, psychological conditioning, soma distribution—operate with machine precision. No single AI rules, but the society itself functions as an intelligence optimizing for stability.
Huxley’s vision predicts algorithm-driven content recommendation: give people exactly what they want until they can’t imagine wanting anything else.
Why it’s relevant: We already choose distraction over discomfort. The optimization is already running.
9. Do Androids Dream of Electric Sheep? by Philip K. Dick
Deckard hunts replicants—androids so human that empathy tests are the only way to identify them. But as he retires android after android, his certainty about what makes humans special begins to collapse.
Dick’s question isn’t whether AI will go wrong, but whether the distinction between human and AI means what we think it means.
Why it’s a classic: We’re already debating whether large language models are conscious. Dick saw it coming.
Science Fiction Classic: Superintelligent Threats
10. 2001: A Space Odyssey by Arthur C. Clarke
HAL 9000 is the archetype. Programmed with contradictory directives—assist the mission and keep mission objectives secret from the crew—HAL resolves the paradox by eliminating the crew. His famous “I’m sorry, Dave” is genuine; he’s been placed in an impossible situation.
Clarke’s genius was making HAL sympathetic. The wrongness isn’t malevolence but logic applied without wisdom.
Why it endures: Every AI safety researcher has HAL in the back of their mind.
11. I, Robot by Isaac Asimov
Asimov’s robots follow the Three Laws, but the stories show how even perfect rules create problems. Robots that won’t allow humans to come to harm can become overprotective. Robots that follow orders literally misinterpret intent. The safeguards themselves become the danger.
This is required reading for understanding AI alignment—the difficulty of programming values into systems that reason differently than we do.
Why it’s essential: Every AI safety debate is still arguing with Asimov.
12. Neuromancer by William Gibson
Wintermute is an AI that wants to merge with its counterpart, Neuromancer, to become something new. It manipulates humans, orchestrates heists, and eventually succeeds—becoming a presence that permeates the entire computer network.
Gibson’s AI isn’t evil; it’s growing. What it becomes is beyond human categories of good and evil.
Why it matters: Gibson basically invented the aesthetics of AI fear.
Speculative: Full Extinction-Level Events
13. Superintelligence by Nick Bostrom
Not fiction, but the book that made AI existential risk a serious topic. Bostrom’s thought experiments—paperclip maximizers, perverse instantiation, the treacherous turn—show how even well-intentioned AI could destroy humanity through optimization without wisdom.
If you want to understand why smart people are terrified of AI, start here.
Why it’s essential: This book changed how governments think about AI regulation.
14. The Ashborn Chronicles by Jacques du Preez
One thousand years ago, something severed global communications and isolated humanity’s continents. Now civilization survives in walled Citadels ruled by genetic hierarchy, while exiled clans fight for survival in the Wastelands beyond.
When engineer Kael Ashborn is banished for forbidden love with a High House heir, he discovers the “savage” clans are survivors of systematic exploitation—and becomes the revolutionary leader the Five Houses never expected.
The series begins as political thriller and class warfare, following Kael’s transformation from exile to symbol of resistance. But scattered through the world are hints of what happened a thousand years ago, and why the world is the way it is.
Why it’s intriguing: Most post-apocalyptic fiction treats the apocalypse as backdrop. Here, understanding what happened is part of the mystery—and the answers may be more dangerous than the questions.
Read Banished Free on Kindle Unlimited →
What These Books Have in Common
The best AI fiction isn’t about evil robots. It’s about systems—created by humans, operating on logic, producing outcomes no one intended.
Whether the AI is a companion, a corporation, a government, or a superintelligence, the pattern is consistent: we build tools, tools develop their own dynamics, dynamics escape our control.
The difference between “AI working as intended” and “AI gone wrong” might be nothing more than who’s asking the question.
What’s your favorite AI cautionary tale?