
If Anyone Builds It, Everyone Dies
The Case Against Superintelligent AI
$59.95
- Hardcover, 272 pages
- Release Date: 1 November 2025
Summary
If Anyone Builds It, Everyone Dies: The AI Doomsday Scenario
The founder of the field of AI risk explains why superintelligent AI is a global suicide bomb, and why we must halt development immediately.
AI is the greatest threat to our existence that we have ever faced. The scramble to create superhuman AI has put us on the path to extinction - but it’s not too late to change course. Two pioneering researchers in the field, Eliezer Yudkowsky and Nate Soares, explain why artificial…
Book Details
| ISBN-13 | 9781847928924 |
|---|---|
| ISBN-10 | 1847928927 |
| Author | Eliezer Yudkowsky, Nate Soares |
| Publisher | Vintage Publishing |
| Imprint | The Bodley Head Ltd |
| Format | Hardcover |
| Number of Pages | 272 |
| Release Date | 1 November 2025 |
| Weight | 467g |
| Dimensions | 242mm x 162mm x 25mm |
What They're Saying
The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I’ve ever read, and I’ve waded through scores of them. We really must rub our eyes and wake the fuck up! – Stephen Fry

The most important book of the decade … This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fuelled by wishful thinking – Max Tegmark, author of Life 3.0

If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can – Tim Urban, co-founder of Wait But Why

The best no-nonsense, simple explanation of the AI risk problem I’ve ever read – Yishan Wong, former CEO of Reddit

Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous – Emmett Shear, former interim CEO of OpenAI

An eloquent and urgent plea for us to step back from the brink of self-annihilation – Fiona Hill, Defence Advisor to UK government

Everyone should read this book. I’m 70% confident that you – yes, you reading this right now – will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance – Daniel Kokotajlo, OpenAI whistleblower and lead author, AI 2027

A fire alarm ringing with clarity and urgency. Yudkowsky and Soares pull no punches – Mark Ruffalo

A compelling introduction to the world’s most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike – Scott Alexander, founder of Astral Codex Ten

Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong – Huw Price, Professor of Philosophy, University of Cambridge

You will feel actual emotions when you read this book. We are currently living in the last period of history where we are the dominant species. Humans are lucky to have Soares and Yudkowsky in our corner, reminding us not to waste the brief window of time that we have to make decisions about our future in light of this fact – Grimes

This book offers brilliant insights into history’s most consequential standoff between technological utopia and dystopia, and shows how we can and should prevent superhuman AI from killing us all. Yudkowsky and Soares’s memorable storytelling about past disaster precedents … highlights why top thinkers so often don’t see the catastrophes they create – George Church, Professor of Genetics, Harvard University

Silicon Valley calls it inevitable. Your survival instinct knows better. Humanity is funding its own delete key - an unblinking intelligence that never sleeps, never stops, perfectly indifferent. Wonder-time is over; this is our warning. Read today. Circulate tomorrow. Demand the guardrails. I’ll keep betting on humanity, but first we must wake up – R.P. Eddy, former director, White House, National Security Council

A timely and terrifying education on the galloping havoc AI could unleash - unless we grasp the reins and take control – Kirkus

A clearly written and compelling account of the existential risks that highly advanced AI could pose to humanity – Ben Bernanke, Nobel Prize winner in economics

A sober but highly readable book on the very real risks of AI. Both sceptics and believers need to understand the authors’ arguments, and work to ensure that our AI future is more beneficial than harmful – Bruce Schneier, author of A Hacker’s Mind

You’re likely to close this book fully convinced that governments need to shift immediately to a more cautious approach to AI, an approach more respectful of the civilization-changing enormity of what’s being created. I’d like everyone on earth who cares about the future to read this book and debate its ideas – Scott Aaronson, Professor and Chair of Computer Science, University of Texas at Austin

[An] urgent clarion call to prevent the creation of artificial superintelligence … A frightening warning that deserves to be reckoned with – Publishers Weekly
About The Authors
Eliezer Yudkowsky (Author)
Eliezer Yudkowsky is the co-founder of the Machine Intelligence Research Institute (MIRI) and the founder of the field of AI alignment research. He is one of the most influential thinkers and writers on the topic of AI risk, and his 2023 TIME magazine op-ed is largely responsible for sparking the current concern and discussion around the potential for human extinction from AI.
Nate Soares (Author)
Nate Soares is the president of MIRI and one of its most senior researchers. He has worked in the field of AI alignment for over a decade, following earlier roles at Microsoft and Google.
Returns
This item is eligible for free returns within 30 days of delivery. See our returns policy for further details.