Doom Debates – Details, Episodes, and Analysis

Podcast Details

Technical and general information taken from the podcast's RSS feed.

Doom Debates

Liron Shapira

Technology
Business

Frequency: 1 episode every 5 days. Total episodes: 69

Substack
It's time to talk about the end of the world!

lironshapira.substack.com

Recent Rankings

Latest positions in the Apple Podcasts and Spotify charts.

Apple Podcasts

  • 🇨🇦 Canada - Technology: #73 (22/04/2025)
  • 🇬🇧 United Kingdom - Technology: #87 (02/01/2025)
  • 🇬🇧 United Kingdom - Technology: #97 (01/01/2025)
  • 🇨🇦 Canada - Technology: #74 (31/12/2024)
  • 🇬🇧 United Kingdom - Technology: #90 (31/12/2024)
  • 🇨🇦 Canada - Technology: #48 (30/12/2024)
  • 🇬🇧 United Kingdom - Technology: #74 (30/12/2024)
  • 🇨🇦 Canada - Technology: #77 (29/12/2024)

Spotify

    No recent rankings available



RSS Feed Quality and Score

Technical evaluation of the quality and structure of the RSS feed.

RSS feed quality
Needs improvement

Overall score: 59%


Publication History

Monthly breakdown of episode publications over the years.


Latest Published Episodes

List of recent episodes, with titles, durations, and descriptions.


AI Will Kill Us All — Liron Shapira on The Flares

Friday, December 27, 2024. Duration: 01:23:36

This week Liron was interviewed by Gaëtan Selle on @the-flares about AI doom.

Cross-posted from their channel with permission.

Original source: https://www.youtube.com/watch?v=e4Qi-54I9Zw

0:00:02 Guest Introduction

0:01:41 Effective Altruism and Transhumanism

0:05:38 Bayesian Epistemology and Extinction Probability

0:09:26 Defining Intelligence and Its Dangers

0:12:33 The Key Argument for AI Apocalypse

0:18:51 AI’s Internal Alignment

0:24:56 What Will AI's Real Goal Be?

0:26:50 The Train of Apocalypse

0:31:05 Among Intellectuals, Who Rejects the AI Apocalypse Arguments?

0:38:32 The Shoggoth Meme

0:41:26 Possible Scenarios Leading to Extinction

0:50:01 The Only Solution: A Pause in AI Research?

0:59:15 The Risk of Violence from AI Risk Fundamentalists

1:01:18 What Will General AI Look Like?

1:05:43 Sci-Fi Works About AI

1:09:21 The Rationale Behind Cryonics

1:12:55 What Does a Positive Future Look Like?

1:15:52 Are We Living in a Simulation?

1:18:11 Many Worlds in Quantum Mechanics Interpretation

1:20:25 Ideal Future Podcast Guest for Doom Debates

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Roon vs. Liron: AI Doom Debate

Wednesday, December 18, 2024. Duration: 01:44:46

Roon is a member of the technical staff at OpenAI. He’s a highly respected voice on tech Twitter, despite being a pseudonymous cartoon avatar account. In late 2021, he invented the terms “shape rotator” and “wordcel” to refer roughly to visual/spatial/mathematical intelligence vs. verbal intelligence. He is simultaneously a serious thinker, a builder, and a shitposter.

I'm excited to learn more about Roon, his background, his life, and of course, his views about AI and existential risk.

00:00 Introduction

02:43 Roon’s Quest and Philosophies

22:32 AI Creativity

30:42 What’s Your P(Doom)™

54:40 AI Alignment

57:24 Training vs. Production

01:05:37 ASI

01:14:35 Goal-Oriented AI and Instrumental Convergence

01:22:43 Pausing AI

01:25:58 Crux of Disagreement

1:27:55 Dogecoin

01:29:13 Doom Debates’s Mission

Show Notes

Follow Roon: https://x.com/tszzl

For Humanity: An AI Safety Podcast with John Sherman — https://www.youtube.com/@ForHumanityPodcast

Lethal Intelligence Guide, the ultimate animated video introduction to AI x-risk – https://www.youtube.com/watch?v=9CUFbqh16Fg

PauseAI, the volunteer organization I’m part of — https://pauseai.info/

Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

15-Minute Intro to AI Doom

Monday, November 4, 2024. Duration: 15:52

Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade.

If you haven't been following all the urgent warnings, I'm here to bring you up to speed.

* Human-level AI is coming soon

* It’s an existential threat to humanity

* The situation calls for urgent action

Listen to this 15-minute intro to get the lay of the land.

Then follow these links to learn more and see how you can help:

* The Compendium

A longer written introduction to AI doom by Connor Leahy et al

* AGI Ruin — A list of lethalities

A comprehensive list by Eliezer Yudkowsky of reasons why developing superintelligent AI is unlikely to go well for humanity

* AISafety.info

A catalogue of AI doom arguments and responses to objections

* PauseAI.info

The largest volunteer organization focused on lobbying world governments to pause the development of superintelligent AI

* PauseAI Discord

Chat with PauseAI members, see a list of projects and get involved

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Lee Cronin vs. Liron Shapira: AI Doom Debate

Wednesday, October 30, 2024. Duration: 01:31:58

Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented “Assembly Theory” as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.

Today we’re debating Lee's claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.

00:00 Introduction

04:20 Assembly Theory

05:10 Causation and Complexity

10:07 Assembly Theory in Practice

12:23 The Concept of Assembly Index

16:54 Assembly Theory Beyond Molecules

30:13 P(Doom)

32:39 The Statement on AI Risk

42:18 Agency and Intent

47:10 RescueBot’s Intent vs. a Clock’s

53:42 The Future of AI and Human Jobs

57:34 The Limits of AI Creativity

01:04:33 The Complexity of the Human Brain

01:19:31 Superintelligence: Fact or Fiction?

01:29:35 Final Thoughts

Lee’s Wikipedia: https://en.wikipedia.org/wiki/Leroy_Cronin

Lee’s Twitter: https://x.com/leecronin

Lee’s paper on Assembly Theory: https://arxiv.org/abs/2206.02279

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Ben Horowitz says nuclear proliferation is GOOD? I disagree.

Friday, October 25, 2024. Duration: 28:55

Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good.

I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.

If Ben and a16z can’t appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?

00:00 Introduction

00:49 Ben Horowitz on Nuclear Proliferation

02:12 Ben Horowitz on Open Source AI

05:31 Nuclear Non-Proliferation Treaties

10:25 Escalation Spirals

15:20 Rogue Actors

16:33 Nuclear Accidents

17:19 Safety Mechanism Failures

20:34 The Role of Human Judgment in Nuclear Safety

21:39 The 1983 Soviet Nuclear False Alarm

22:50 a16z’s Disingenuousness

23:46 Martin Casado and Marc Andreessen

24:31 Nuclear Equilibrium

26:52 Why I Care

28:09 Wrap Up

Sources of this episode’s video clips:

Ben Horowitz’s interview on Upstream with Erik Torenberg: https://www.youtube.com/watch?v=oojc96r3Kuo

Martin Casado and Marc Andreessen talking about AI on the a16z Podcast: https://www.youtube.com/watch?v=0wIUK0nsyUg

Roger Skaer’s TikTok: https://www.tiktok.com/@rogerskaer

George W. Bush and John Kerry Presidential Debate (September 30, 2004): https://www.youtube.com/watch?v=WYpP-T0IcyA

Barack Obama’s Prague Remarks on Nuclear Disarmament: https://www.youtube.com/watch?v=QKSn1SXjj2s

John Kerry’s Remarks at the 2015 Nuclear Nonproliferation Treaty Review Conference: https://www.youtube.com/watch?v=LsY1AZc1K7w

Show notes:

Nuclear War, A Scenario by Annie Jacobsen: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093

Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb: https://en.wikipedia.org/wiki/Dr._Strangelove

1961 Goldsboro B-52 Crash: https://en.wikipedia.org/wiki/1961_Goldsboro_B-52_crash

1983 Soviet Nuclear False Alarm Incident: https://en.wikipedia.org/wiki/1983_Soviet_nuclear_false_alarm_incident

List of military nuclear accidents: https://en.wikipedia.org/wiki/List_of_military_nuclear_accidents

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

“AI Snake Oil” Prof. Arvind Narayanan Can't See AGI Coming | Liron Reacts

Sunday, October 13, 2024. Duration: 01:12:42

Today I’m reacting to Arvind Narayanan’s interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY

Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Arvind claims AI is “normal technology like the internet”, and never sees fit to bring up the impact or urgency of AGI. So I’ll take it upon myself to point out all the questions where someone who takes AGI seriously would give different answers.

00:00 Introduction

01:49 AI is “Normal Technology”?

09:25 Playing Chess vs. Moving Chess Pieces

12:23 AI Has To Learn From Its Mistakes?

22:24 The Symbol Grounding Problem and AI's Understanding

35:56 Human vs AI Intelligence: The Fundamental Difference

36:37 The Cognitive Reflection Test

41:34 The Role of AI in Cybersecurity

43:21 Attack vs. Defense Balance in (Cyber)War

54:47 Taking AGI Seriously

01:06:15 Final Thoughts

Show Notes

The original Nonzero podcast episode with Arvind Narayanan and Robert Wright: https://www.youtube.com/watch?v=MoB_pikM3NY

Arvind’s new book, AI Snake Oil: https://www.amazon.com/Snake-Oil-Artificial-Intelligence-Difference-ebook/dp/B0CW1JCKVL

Arvind’s Substack: https://aisnakeoil.com

Arvind’s Twitter: https://x.com/random_walker

Robert Wright’s Twitter: https://x.com/robertwrighter

Robert Wright’s Nonzero Newsletter: https://nonzero.substack.com

Rob’s excellent post about symbol grounding (Yes, AIs ‘understand’ things): https://nonzero.substack.com/p/yes-ais-understand-things

My previous episode of Doom Debates reacting to Arvind Narayanan on Harry Stebbings’ podcast: https://www.youtube.com/watch?v=lehJlitQvZE

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Dr. Keith Duggar has a high P(doom)?! Debate with MLST Co-host

Tuesday, October 8, 2024. Duration: 02:11:32

Dr. Keith Duggar from Machine Learning Street Talk was the subject of my recent reaction episode about whether GPT o1 can reason. But instead of ignoring or blocking me, Keith was brave enough to come into the lion’s den and debate his points with me… and his P(doom) might shock you!

First we debate whether Keith’s distinction between Turing Machines and Discrete Finite Automata is useful for understanding limitations of current LLMs. Then I take Keith on a tour of alignment, orthogonality, instrumental convergence, and other popular stations on the “doom train”, to compare our views on each.

Keith was a great sport and I think this episode is a classic!

00:00 Introduction

00:46 Keith’s Background

03:02 Keith’s P(doom)

14:09 Are LLMs Turing Machines?

19:09 Liron Concedes on a Point!

21:18 Do We Need >1MB of Context?

27:02 Examples to Illustrate Keith’s Point

33:56 Is Terence Tao a Turing Machine?

38:03 Factoring Numbers: Human vs. LLM

53:24 Training LLMs with Turing-Complete Feedback

1:02:22 What Does the Pillar Problem Illustrate?

01:05:40 Boundary between LLMs and Brains

1:08:52 The 100-Year View

1:18:29 Intelligence vs. Optimization Power

1:23:13 Is Intelligence Sufficient To Take Over?

01:28:56 The Hackable Universe and AI Threats

01:31:07 Nuclear Extinction vs. AI Doom

1:33:16 Can We Just Build Narrow AI?

01:37:43 Orthogonality Thesis and Instrumental Convergence

01:40:14 Debating the Orthogonality Thesis

02:03:49 The Rocket Alignment Problem

02:07:47 Final Thoughts

Show Notes

Keith’s show: https://www.youtube.com/@MachineLearningStreetTalk

Keith’s Twitter: https://x.com/doctorduggar

Keith’s fun brain teaser that LLMs can’t solve yet, about a pillar with four holes: https://youtu.be/nO6sDk6vO0g?si=diGUY7jW4VFsV0TJ&t=3684

Eliezer Yudkowsky’s classic post about the “Rocket Alignment Problem”: https://intelligence.org/2018/10/03/rocket-alignment/

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates.

📣 You can now chat with me and other listeners in the #doom-debates channel of the PauseAI discord: https://discord.gg/2XXWXvErfA



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Getting Arrested for Barricading OpenAI's Office to Stop AI

Friday, October 4, 2024. Duration: 45:38

Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.

Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.

Is civil disobedience the right strategy to pause or stop AI?

00:00 Introducing Stop AI

00:38 Arrested at OpenAI Headquarters

01:14 Stop AI’s Funding

01:26 Blocking Entrances Strategy

03:12 Protest Logistics and Arrest

08:13 Blocking Traffic

12:52 Arrest and Legal Consequences

18:31 Commitment to Nonviolence

21:17 A Day in the Life of a Protestor

21:38 Civil Disobedience

25:29 Planning the Next Protest

28:09 Stop AI Goals and Strategies

34:27 The Ethics and Impact of AI Protests

42:20 Call to Action

Show Notes

StopAI's next protest is on October 21, 2024 at OpenAI, 575 Florida St, San Francisco, CA 94110.

StopAI Website: https://StopAI.info

StopAI Discord: https://discord.gg/gbqGUt7ZN4

Disclaimer: I (Liron) am not part of StopAI, but I am a member of PauseAI, which also has a website and Discord you can join.

PauseAI Website: https://pauseai.info

PauseAI Discord: https://discord.gg/2XXWXvErfA

There's also a special #doom-debates channel in the PauseAI Discord just for us :)

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Q&A #1 Part 2: Stock Picking, Creativity, Types of Doomers, Favorite Books

Wednesday, October 2, 2024. Duration: 01:09:43

This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions!

00:00 Introduction

01:20 Planning for a good outcome?

03:10 Stock Picking Advice

08:42 Dumbing It Down for Dr. Phil

11:52 Will AI Shorten Attention Spans?

12:55 Historical Nerd Life

14:41 YouTube vs. Podcast Metrics

16:30 Video Games

26:04 Creativity

30:29 Does AI Doom Explain the Fermi Paradox?

36:37 Grabby Aliens

37:29 Types of AI Doomers

44:44 Early Warning Signs of AI Doom

48:34 Do Current AIs Have General Intelligence?

51:07 How Liron Uses AI

53:41 Is “Doomer” a Good Term?

57:11 Liron’s Favorite Books

01:05:21 Effective Altruism

01:06:36 The Doom Debates Community

---

Show Notes

PauseAI Discord: https://discord.gg/2XXWXvErfA

Robin Hanson’s Grabby Aliens theory: https://grabbyaliens.com

Prof. David Kipping’s response to Robin Hanson’s Grabby Aliens: https://www.youtube.com/watch?v=tR1HTNtcYw0

My explanation of “AI completeness”, but actually I made a mistake because the term I previously coined is “goal completeness”: https://www.lesswrong.com/posts/iFdnb8FGRF4fquWnc/goal-completeness-is-like-turing-completeness-for-agi

^ Goal-Completeness (and the corresponding Shapira-Yudkowsky Thesis) might be my best/only original contribution to AI safety research, albeit a small one. Max Tegmark even retweeted it.

a16z's Ben Horowitz claiming nuclear proliferation is good, actually: https://x.com/liron/status/1690087501548126209

---

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Q&A #1 Part 1: College, Asperger's, Elon Musk, Double Crux, Liron's IQ

Tuesday, October 1, 2024. Duration: 01:01:36

Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.

00:00 Introduction

01:17 Is OpenAI a sinking ship?

07:25 College Education

13:20 Asperger's

16:50 Elon Musk: Genius or Clown?

22:43 Double Crux

32:04 Why Call Doomers a Cult?

36:45 How I Prepare Episodes

40:29 Dealing with AI Unemployment

44:00 AI Safety Research Areas

46:09 Fighting a Losing Battle

53:03 Liron’s IQ

01:00:24 Final Thoughts

Explanation of Double Crux: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding

Best Doomer Arguments

The LessWrong sequences by Eliezer Yudkowsky: https://ReadTheSequences.com

LethalIntelligence.ai — Directory of people who are good at explaining doom

Rob Miles’ Explainer Videos: https://www.youtube.com/c/robertmilesai

For Humanity Podcast with John Sherman - https://www.youtube.com/@ForHumanityPodcast

PauseAI community — https://PauseAI.info — join the Discord!

AISafety.info — Great reference for various arguments

Best Non-Doomer Arguments

Carl Shulman — https://www.dwarkeshpatel.com/p/carl-shulman

Quintin Pope and Nora Belrose — https://optimists.ai

Robin Hanson — https://www.youtube.com/watch?v=dTQb6N3_zu8

How I prepared to debate Robin Hanson

Ideological Turing Test (me taking Robin’s side): https://www.youtube.com/watch?v=iNnoJnuOXFA

Walkthrough of my outline of prepared topics: https://www.youtube.com/watch?v=darVPzEhh-I

Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com

Similar Podcasts Based on Content

Discover podcasts related to Doom Debates. Explore podcasts with similar themes, topics, and formats. These similarities are calculated from tangible data, not extrapolations!
My First Million
First Things THRST
The Ezra Klein Show
The Engineering Leadership Podcast
Marketing Against The Grain
The Lawfare Podcast
Mind Pump: Raw Fitness Truth
The Dr. Gabrielle Lyon Show
Better! with Dr. Stephanie
The Checkup with Doctor Mike