If Anyone Builds It, Everyone Dies

The Case Against Superintelligent AI

$33.60

  • Paperback

    304 pages

  • Release Date

    30 September 2025

Summary

The AI Apocalypse: Why Building Superintelligence Guarantees Our Demise

The founder of the field of AI risk explains why superintelligent AI is a global suicide bomb and why we must halt development immediately.

AI is the greatest threat to our existence that we have ever faced. The technology may be complex, but the facts are simple:

  • We are currently on a path to build superintelligent AI.
  • When we do, it will be vastly more powerful than us.
  • Whet…

Book Details

ISBN-13: 9781847928931
ISBN-10: 1847928935
Authors: Eliezer Yudkowsky, Nate Soares
Publisher: Vintage Publishing
Imprint: The Bodley Head Ltd
Format: Paperback
Number of Pages: 304
Release Date: 30 September 2025
Weight: 700g
Dimensions: 234mm x 153mm x 40mm
What They're Saying

Critics' Reviews

The most important book I’ve read for years: I want to bring it to every political and corporate leader in the world and stand over them until they’ve read it. Yudkowsky and Soares, who have studied AI and its possible trajectories for decades, sound a loud trumpet call to humanity to awaken us as we sleepwalk into disaster. Their brilliant gift for analogy, metaphor and parable clarifies for the general reader the tangled complexities of AI engineering, cognition and neuroscience better than any book on the subject I’ve ever read, and I’ve waded through scores of them. We really must rub our eyes and wake the fuck up! – Stephen Fry

If Anyone Builds It, Everyone Dies may prove to be the most important book of our time. Yudkowsky and Soares believe we are nowhere near ready to make the transition to superintelligence safely, leaving us on the fast track to extinction. Through the use of parables and crystal-clear explainers, they convey their reasoning, in an urgent plea for us to save ourselves while we still can – Tim Urban, co-founder of Wait But Why

The best no-nonsense, simple explanation of the AI risk problem I’ve ever read – Yishan Wong, former CEO of Reddit

Soares and Yudkowsky lay out, in plain and easy-to-follow terms, why our current path toward ever-more-powerful AIs is extremely dangerous – Emmett Shear, former interim CEO of OpenAI

About The Author

Eliezer Yudkowsky

Eliezer Yudkowsky (Author)

Eliezer Yudkowsky is the co-founder of the Machine Intelligence Research Institute (MIRI) and the founder of the field of AI alignment research. He is one of the most influential thinkers and writers on AI risk, and his 2023 TIME magazine op-ed is largely responsible for sparking the current concern and discussion around the potential for human extinction from AI.

Nate Soares (Author)

Nate Soares is the president of MIRI and one of its most senior researchers. He has worked in the field of AI alignment for over a decade, following earlier roles at Microsoft and Google.

Returns

This item is eligible for free returns within 30 days of delivery. See our returns policy for further details.
