List of p(doom) values
p(doom) is the probability of very bad outcomes (e.g. human extinction) as a result of AI. It most often refers to the likelihood of AI taking over from humanity, but other scenarios can also constitute "doom": for example, a large portion of the population dying from a novel biological weapon created by AI, social collapse caused by a large-scale cyber attack, or AI triggering a nuclear war. Note that not everyone uses the same definition when giving their p(doom) value. Most notably, the time horizon is often not specified, which makes the figures hard to compare directly.
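To see why the time horizon matters, here is a minimal sketch (in Python) of how a constant annual risk compounds into very different cumulative probabilities depending on the horizon assumed. The 1% annual risk used here is purely illustrative, not anyone's actual estimate, and real forecasts need not assume a constant, independent yearly risk at all.

```python
# Illustrative only: the same constant annual risk implies very different
# cumulative p(doom) values depending on the time horizon assumed.
def cumulative_p_doom(annual_risk: float, years: int) -> float:
    """Probability of at least one catastrophic event within `years`,
    assuming an independent, constant risk each year."""
    return 1 - (1 - annual_risk) ** years

# Hypothetical 1% annual risk (not anyone's actual estimate):
for horizon in (10, 30, 100):
    print(f"{horizon} years: {cumulative_p_doom(0.01, horizon):.0%}")
# 10 years: 10%, 30 years: 26%, 100 years: 63%
```

Someone quoting a p(doom) for "this century" and someone quoting it for "the next decade" can therefore hold similar views yet report very different numbers.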
- Yann LeCun, one of three godfathers of AI, works at Meta (less likely than an asteroid)
- Vitalik Buterin, Ethereum founder
- Geoff Hinton, one of three godfathers of AI (wipe out humanity in the next 20 years)
- Machine learning researchers (from 2023; depending on the question design, median values of 5-10%)
- Lina Khan, head of the FTC
- Paul Christiano (cumulative risks go to 50% when you get to human-level AI)
- Elon Musk, CEO of Tesla, SpaceX, X
- Dario Amodei, CEO of Anthropic
- Yoshua Bengio, one of three godfathers of AI
- Emmett Shear, co-founder of Twitch and former interim CEO of OpenAI
- AI safety researchers (mean from 44 AI safety researchers in 2021)
- Scott Alexander, popular internet blogger at Astral Codex Ten
- Eli Lifland
- AI engineers (estimated mean value; survey methodology may be flawed)
- Joep Meindertsma, founder of PauseAI (the remaining 60% consists largely of "we can pause")
- Holden Karnofsky, Executive Director of Open Philanthropy
- Jan Leike, alignment lead at OpenAI
- Zvi Mowshowitz, AI researcher
- Daniel Kokotajlo, forecaster and former OpenAI researcher
- Dan Hendrycks, head of the Center for AI Safety
- Eliezer Yudkowsky, founder of MIRI
- Roman Yampolskiy, AI safety scientist
Do something about it
However high your p(doom) is, you probably agree that we should not allow AI companies to gamble with our future. Join PauseAI to help prevent them from doing so.