Future of Life Institute Podcast

487 episodes

  • How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)

    07/01/2026 | 1h 20 mins.

    Nora Ammann is a technical specialist at the Advanced Research and Invention Agency in the UK. She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees, and secure code could support AI-enabled R&D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability.

    LINKS:
    - Nora Ammann site
    - ARIA safeguarded AI program page
    - AI Resilience official site
    - Gradual Disempowerment website

    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Slow takeoff expectations
    (08:13) Domination versus chaos
    (17:18) Human-AI coalitions vision
    (28:14) Scaling oversight and agents
    (38:45) Formal specs and guarantees
    (51:10) Resilience in AI era
    (01:02:21) Defense-favored cyber systems
    (01:10:37) AI-enabled bargaining and trade

    PRODUCED BY:
    https://aipodcast.ing

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

  • How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)

    23/12/2025 | 1h 18 mins.

    David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require.

    LINKS:
    - David Duvenaud academic homepage
    - Gradual Disempowerment
    - The Post-AGI Workshop
    - Post-AGI Studies Discord

    CHAPTERS:
    (00:00) Episode Preview
    (01:05) Introducing gradual disempowerment
    (06:06) Obsolete labor and UBI
    (14:29) Property, power, and control
    (23:38) Culture shifts toward AIs
    (34:34) States misalign without people
    (44:15) Competition and preservation tradeoffs
    (53:03) Building post-AGI studies
    (01:02:29) Forecasting and coordination tools
    (01:10:26) Human values and futures

  • Why the AI Race Undermines Safety (with Steven Adler)

    12/12/2025 | 1h 28 mins.

    Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, the limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.

    LINKS:
    - Steven Adler's Substack: https://stevenadler.substack.com

    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Race Dynamics And Safety
    (18:03) Chatbots And Mental Health
    (30:42) Models Outsmart Safety Tests
    (41:01) AI Swarms And Work
    (54:21) Human Bottlenecks And Oversight
    (01:06:23) Animals And Superintelligence
    (01:19:24) Safety Capabilities And Governance

  • Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)

    27/11/2025 | 1h 1 min.

    Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies.

    LINKS:
    - The Midas Project Website
    - Tyler Johnston's LinkedIn Profile

    CHAPTERS:
    (00:00) Episode Preview
    (01:06) Introducing the Midas Project
    (05:01) Shining a Light on AI
    (08:36) Industry Lockdown and Transparency
    (13:45) The OpenAI Files
    (20:55) Subpoenaed by OpenAI
    (29:10) Responding to the Subpoena
    (37:41) The Case for Transparency
    (44:30) Pricing Risk and Regulation
    (52:15) Measuring Transparency and Auditing
    (57:50) Hope for the Future

  • We're Not Ready for AGI (with Will MacAskill)

    14/11/2025 | 2h 3 mins.

    William MacAskill is a senior research fellow at Forethought. He joins the podcast to discuss his Better Futures essay series. We explore moral error risks, AI character design, space governance, and persistent path dependence. The conversation also covers risk-averse AI systems, moral trade between value systems, and improving model specifications for ethical reasoning.

    LINKS:
    - Better Futures Research Series: https://www.forethought.org/research/better-futures
    - William MacAskill Forethought Profile: https://www.forethought.org/people/william-macaskill

    CHAPTERS:
    (00:00) Episode Preview
    (01:03) Improving The Future's Quality
    (09:58) Moral Errors and AI Rights
    (18:24) AI's Impact on Thinking
    (27:17) Utopias and Population Ethics
    (36:41) The Danger of Moral Lock-in
    (44:38) Deals with Misaligned AI
    (57:25) AI and Moral Trade
    (01:08:21) Improving AI Ethical Reasoning
    (01:16:05) The Risk of Path Dependence
    (01:27:41) Avoiding Future Lock-in
    (01:36:22) The Urgency of Space Governance
    (01:46:19) A Future Research Agenda
    (01:57:36) Is Intelligence a Good Bet?


About Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

