
If Anyone Builds It, Everyone Dies
The Case Against Superintelligent AI
$32.00
- Paperback, 272 pages
- Release Date: 30 September 2025
Summary
If Anyone Builds It, Everyone Dies: The AI Doomsday Scenario
The founder of the field of AI risk explains why superintelligent AI is a global suicide bomb and why we must halt development immediately.
AI is the greatest threat to our existence that we have ever faced. The scramble to create superhuman AI has put us on the path to extinction - but it’s not too late to change course. Two pioneering researchers in the field, Eliezer Yudkowsky and Nate Soares, explain why artificial …
Book Details
ISBN-13: 9781847928931
ISBN-10: 1847928935
Author: Eliezer Yudkowsky, Nate Soares
Publisher: Vintage Publishing
Imprint: The Bodley Head Ltd
Format: Paperback
Number of Pages: 272
Release Date: 30 September 2025
Weight: 334g
Dimensions: 234mm x 152mm x 21mm
What They're Saying
Critics' Reviews
The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I’ve ever read, and I’ve waded through scores of them. We really must rub our eyes and wake the fuck up! – Stephen Fry

The most important book of the decade … This captivating page-turner, from two of today’s clearest thinkers, reveals that the competition to build smarter-than-human machines isn’t an arms race but a suicide race, fuelled by wishful thinking – Max Tegmark, author of Life 3.0

If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can – Tim Urban, co-founder of Wait But Why

The best no-nonsense, simple explanation of the AI risk problem I’ve ever read – Yishan Wong, former CEO of Reddit

Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous – Emmett Shear, former interim CEO of OpenAI

An eloquent and urgent plea for us to step back from the brink of self-annihilation – Fiona Hill, Defence Advisor to UK government

Everyone should read this book. I’m 70% confident that you – yes, you reading this right now – will one day grudgingly admit that we all should have listened to Yudkowsky and Soares when we still had the chance – Daniel Kokotajlo, OpenAI whistleblower and lead author, AI 2027

A fire alarm ringing with clarity and urgency. Yudkowsky and Soares pull no punches – Mark Ruffalo

A compelling introduction to the world’s most important topic. Artificial general intelligence could be just a few years away. This is one of the few books that takes the implications seriously, published right as the danger level begins to spike – Scott Alexander, founder of Astral Codex Ten

Claims about the risks of AI are often dismissed as advertising, but this book disproves it. Yudkowsky and Soares are not from the AI industry, and have been writing about these risks since before it existed in its present form. Read their disturbing book and tell us what they get wrong – Huw Price, Professor of Philosophy, University of Cambridge
About The Authors
Eliezer Yudkowsky (Author)
Eliezer Yudkowsky is a founding researcher of the field of AI alignment, with influential work spanning more than twenty years. As co-founder of the non-profit Machine Intelligence Research Institute (MIRI), Yudkowsky sparked early scientific research on the problem and has played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine’s 2023 list of the 100 Most Influential People In AI, and has been discussed or interviewed in the New York Times, New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, Washington Post, and elsewhere.
Nate Soares (Author)
Nate Soares is the president of the non-profit Machine Intelligence Research Institute (MIRI). He has worked in the field for over a decade, following earlier roles at Microsoft and Google. Soares is the author of a large body of technical and semi-technical writing on AI alignment, including foundational work on value learning, decision theory, and power-seeking incentives in smarter-than-human AIs.
Returns
This item is eligible for free returns within 30 days of delivery. See our returns policy for further details.