
Doom Debates

Liron Shapira

Available Episodes

Showing 5 of 76
  • This $85M-Backed Founder Claims Open Source AGI is Safe — Debate with Himanshu Tyagi
    Dr. Himanshu Tyagi is a professor of engineering at the Indian Institute of Science and the co-founder of Sentient, an open-source AI platform that raised $85M in funding led by Founders Fund. In this conversation, Himanshu gives me Sentient’s pitch. Then we debate whether open-sourcing frontier AGI development is a good idea, or a reckless way to raise humanity’s P(doom).

    00:00 Introducing Himanshu Tyagi
    01:41 Sentient’s Vision
    05:20 How’d You Raise $85M?
    11:19 Comparing Sentient to Competitors
    27:26 Open Source vs. Closed Source AI
    43:01 What’s Your P(Doom)™
    48:44 Extinction from Superintelligent AI
    54:02 AI's Control Over Digital and Physical Assets
    01:00:26 AI's Influence on Human Movements
    01:08:46 Recapping the Debate
    01:13:17 Liron’s Announcements

    Show Notes
    Himanshu’s Twitter — https://x.com/hstyagi
    Sentient’s website — https://sentient.foundation
    Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
    Hope to see you there!
    If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:21:47
  • Emergency Episode: John Sherman FIRED from Center for AI Safety
    My friend John Sherman from the For Humanity podcast got hired by the Center for AI Safety (CAIS) two weeks ago. Today I suddenly learned he’s been fired. I’m frustrated by this decision, and frustrated with the whole AI x-risk community’s weak messaging.

    Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
    Hope to see you there!
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    16:43
  • Gary Marcus vs. Liron Shapira — AI Doom Debate
    Prof. Gary Marcus is a scientist, bestselling author and entrepreneur, well known as one of the most influential voices in AI. He is Professor Emeritus of Psychology and Neuroscience at NYU. He was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. Gary co-authored the 2019 book, Rebooting AI: Building Artificial Intelligence We Can Trust, and the 2024 book, Taming Silicon Valley: How We Can Ensure That AI Works for Us. He played an important role in the 2023 Senate Judiciary Subcommittee Hearing on Oversight of AI, testifying with Sam Altman. In this episode, Gary and I have a lively debate about whether P(doom) is approximately 50%, or if it’s less than 1%!

    00:00 Introducing Gary Marcus
    02:33 Gary’s AI Skepticism
    09:08 The Human Brain is a Kluge
    23:16 The 2023 Senate Judiciary Subcommittee Hearing
    28:46 What’s Your P(Doom)™
    44:27 AI Timelines
    51:03 Is Superintelligence Real?
    01:00:35 Humanity’s Immune System
    01:12:46 Potential for Recursive Self-Improvement
    01:26:12 AI Catastrophe Scenarios
    01:34:09 Defining AI Agency
    01:37:43 Gary’s AI Predictions
    01:44:13 The NYTimes Obituary Test
    01:51:11 Recap and Final Thoughts
    01:53:35 Liron’s Outro
    01:55:34 Eliezer Yudkowsky’s New Book!
    01:59:49 AI Doom Concept of the Day

    Show Notes
    Gary’s Substack — https://garymarcus.substack.com
    Gary’s Twitter — https://x.com/garymarcus
    If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com
    Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
    Hope to see you there!
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    2:04:01
  • Mike Israetel vs. Liron Shapira — AI Doom Debate
    Dr. Mike Israetel, renowned exercise scientist and social media personality, and more recently a low-P(doom) AI futurist, graciously offered to debate me!

    00:00 Introducing Mike Israetel
    12:19 What’s Your P(Doom)™
    30:58 Timelines for Artificial General Intelligence
    34:49 Superhuman AI Capabilities
    43:26 AI Reasoning and Creativity
    47:12 Evil AI Scenario
    01:08:06 Will the AI Cooperate With Us?
    01:12:27 AI's Dependence on Human Labor
    01:18:27 Will AI Keep Us Around to Study Us?
    01:42:38 AI's Approach to Earth's Resources
    01:53:22 Global AI Policies and Risks
    02:03:02 The Quality of Doom Discourse
    02:09:23 Liron’s Outro

    Show Notes
    Mike’s Instagram — https://www.instagram.com/drmikeisraetel
    Mike’s YouTube — https://www.youtube.com/@MikeIsraetelMakingProgress
    Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
    Hope to see you there!
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    2:15:10
  • Doom Scenario: Human-Level AI Can't Control Smarter AI
    I want to be transparent about how I’ve updated my mainline AI doom scenario in light of safe & useful LLMs. So here’s where I’m at…

    00:00 Introduction
    07:59 The Dangerous Threshold to Runaway Superintelligence
    18:57 Superhuman Goal Optimization = Infinite Time Horizon
    21:21 Goal-Completeness by Analogy to Turing-Completeness
    26:53 Intellidynamics
    29:13 Goal-Optimization Is Convergent
    31:15 Early AIs Lose Control of Later AIs
    34:46 The Superhuman Threshold Is Real
    38:27 Expecting Rapid FOOM
    40:20 Rocket Alignment
    49:59 Stability of Values Under Self-Modification
    53:13 The Way to Heaven Passes Right By Hell
    57:32 My Mainline Doom Scenario
    01:17:46 What Values Does The Goal Optimizer Have?

    Show Notes
    My recent episode with Jim Babcock on this same topic of mainline doom scenarios — https://www.youtube.com/watch?v=FaQjEABZ80g
    The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
    Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
    Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
    PauseAI, the volunteer organization I’m part of: https://pauseai.info
    Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates
    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:24:12


About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com
