
If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI

(Paperback)


Publishing Details

Full Title:

If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI

Contributors:

By (author) Eliezer Yudkowsky
By (author) Nate Soares

ISBN:

9781847928931

Publisher:

Vintage Publishing

Imprint:

The Bodley Head Ltd

Publication Date:

30th September 2025

Country:

United Kingdom

Classifications

Readership:

Tertiary Education

Fiction/Non-fiction:

Non Fiction

Other Subjects:

Digital and information technologies: social and ethical aspects
Artificial intelligence
Ethical issues: scientific, technological and medical developments
Technology: general issues

Physical Properties

Physical Format:

Paperback

Number of Pages:

304

Dimensions:

Width 153mm, Height 234mm, Spine 40mm

Weight:

700g

Description

The founder of the field of AI risk explains why superintelligent AI is a global suicide bomb and why we must halt development immediately.

AI is the greatest threat to our existence that we have ever faced. The technology may be complex, but the facts are simple. We are currently on a path to build superintelligent AI. When we do, it will be vastly more powerful than us. Whether it 'thinks' or 'feels' is irrelevant; it will have objectives, and they will be completely different from ours. And regardless of how we train it, even the slightest deviation from human goals will be catastrophic for our species - meaning extinction. Precisely how this happens is unknowable, but what we do know is that when it happens, it will happen incredibly fast, and however it happens, all paths lead to the same conclusion: superintelligent AI is a global suicide bomb, the labs developing it have no adequate plan or set of policies for tackling this issue, and we will not get a second chance. From the leading thinkers in the field of AI risk, If Anyone Builds It, Everyone Dies explains with terrifying clarity why, in the race to build superintelligent AI, the only winning move for our species is not to play.

Author Bio

Eliezer Yudkowsky (Author)

Eliezer Yudkowsky is the co-founder of the Machine Intelligence Research Institute (MIRI) and the founder of the field of AI alignment research. He is one of the most influential thinkers and writers on the topic of AI risk, and his 2023 TIME magazine op-ed is largely responsible for sparking the current concern and discussion around the potential for human extinction from AI.

Nate Soares (Author)

Nate Soares is the president of MIRI and one of its most senior researchers. He has been working in the field of AI alignment for over a decade, following previous experience at Microsoft and Google.
