
The MAD Podcast with Matt Turck

Matt Turck

115 episodes

  • OpenAI Board Member Zico Kolter on the Real Risks of Frontier AI

    07/05/2026 | 1h 16 mins.
    What actually happens before a frontier AI model gets released — and who decides whether it is safe enough? In this episode of The MAD Podcast, Matt Turck sits down with Zico Kolter — OpenAI board member, Head of the Machine Learning Department at Carnegie Mellon, and co-founder of Gray Swan — for a deep conversation on the real risks of frontier AI. They discuss how OpenAI’s safety oversight works before major model releases, why more powerful models do not automatically become safer, how jailbreaks and prompt injection expose real weaknesses in AI systems, why AI agents dramatically expand the attack surface, and where frontier AI is headed next. A clear, practical discussion on OpenAI, AI safety, AI security, AI agents, frontier models, red teaming, reinforcement learning, and the future of AI governance.

    (00:00) Intro
    (01:32) OpenAI board role and Safety & Security Committee
    (03:53) How OpenAI reviews major model releases
    (05:33) OpenAI’s preparedness framework explained
    (09:46) Are frontier AI models getting safer?
    (12:33) Why AI safety does not come from scale
    (15:23) The four categories of AI risk
    (19:38) Doomerism vs accelerationism in AI
    (24:11) The six-month AI pause debate
    (26:20) AI safety as a global effort
    (28:04) How Zico Kolter got into machine learning
    (31:05) OpenAI in the early days
    (34:14) Why Carnegie Mellon became an AI powerhouse
    (38:43) What Gray Swan does in AI security
    (40:44) AI safety vs AI security
    (43:15) The GCG jailbreak paper
    (49:19) How AI labs responded to jailbreak research
    (50:19) State-of-the-art AI defenses
    (52:32) State-of-the-art AI attacks
    (54:22) Why AI agents expand the attack surface
    (58:39) Are AI agents ready for production?
    (59:40) Mechanistic interpretability explained
    (1:02:31) Will AI be safer in two years?
    (1:03:46) Reinforcement learning and self-improving models
    (1:08:09) Do post-transformer architectures matter?
    (1:09:29) Best research directions in AI now
    (1:11:00) Zico Kolter’s Intro to Modern AI course
    (1:14:53) Why modern AI is simpler than people think
  • Anthropic’s Felix Rieseberg: Claude Cowork, Mythos, and the SaaS Extinction

    10/04/2026 | 58 mins.
Felix Rieseberg leads engineering for Claude Cowork at Anthropic, one of the most important new agentic AI products on the market today. In this episode of The MAD Podcast, Matt Turck sits down with Felix to discuss Anthropic’s newly announced Claude Mythos Preview, why Felix sees it as a genuine step-function change, and what it means when frontier AI starts showing outsized cybersecurity capabilities.

    The conversation then goes deep on Claude Cowork: how it emerged from Claude Code, what the famous “10-day” story really means, why Anthropic believes AI needs access to the local computer, and how Cowork actually works under the hood. Felix explains why skills are just text files, why memory is often just text files too, and how Anthropic thinks about building trust in AI agents.

    They also explore some of the biggest questions in AI product design and the future of software: why UX may matter as much as the model itself, why execution is becoming dramatically cheaper, what that means for product management and startups, and why Felix believes taste, alignment, and understanding humans may matter more than ever.

    (00:00) Intro
    (01:53) Claude Mythos Preview and the “step-function change”
    (06:16) Why Anthropic is treating Mythos differently
    (11:19) The real story behind Claude Cowork’s “10-day” build
    (12:42) Why Anthropic realized Claude Code needed a non-technical version
    (15:44) What Claude Cowork actually is
    (17:03) Under the hood: virtual machines, tools, skills
    (18:36) Where Cowork’s memory actually lives
    (19:26) How Cowork connects to files, apps, and the internet
    (20:45) Why Felix thinks the local computer is under-appreciated
    (24:49) Trust: how do you get users comfortable with AI agents?
    (28:45) What UX actually means for AI agents
    (31:27) Why Claude Cowork's roadmap is only one month long
    (34:12) Building 100 prototypes
    (35:10) If execution is free, what becomes the bottleneck?
    (37:25) Does it come down to taste?
    (40:12) The hardest part of building Claude Cowork
    (41:43) Advice for founders building AI agents
    (44:21) SaaSpocalypse: what’s left for software startups?
    (49:30) Where AI agents are going next
    (51:20) Regulated industries and enterprise adoption
    (54:15) Hot takes: what's underrated, overrated, and what Felix would build today
  • AI is Already Building AI | Google DeepMind’s Mostafa Dehghani

    02/04/2026 | 1h 4 mins.
    Are we truly on the verge of AI automating its own research and development? In this deep-dive episode of the MAD Podcast, Matt Turck sits down with Mostafa Dehghani, a pioneering AI researcher at Google DeepMind whose work on Universal Transformers and Vision Transformers (ViT) helped lay the groundwork for today's frontier models.

    Moving past the hype, Mostafa breaks down the actual mechanics of "thinking in loops" and Recursive Self-Improvement (RSI). He explores the critical bottlenecks holding back true AGI—from evaluation limits and formal verification to the brutal math of long-horizon reliability.

    Mostafa and Matt also discuss the shift from pre-training to post-training, how Gemini's Nano Banana 2 processes pixels and text simultaneously, and why the "frozen" nature of today's models means Continual Learning is the next massive frontier for enterprise AI and data pipelines.

    (00:00) Intro
    (01:17) What “loops” in AI actually mean
    (05:04) Self-improvement as the next chapter of machine learning
    (07:32) Are Karpathy’s autoresearch agents an early form of AI self-improvement?
    (08:56) AI building AI: how close are we?
    (10:02) The biggest bottlenecks: evals, automation, and long horizons
    (12:36) Can formal verification unlock recursive self-improvement?
    (14:06) What is model collapse?
    (15:33) Generalization vs specialization in AI
    (18:04) What is a specialized model today?
    (20:57) Could top AI researchers themselves be automated?
    (24:02) If AI builds AI, does data matter less than compute?
    (26:22) Post-training vs pre-training: where will progress come from?
    (28:14) Why pre-training is not dead
    (29:45) What is continual learning?
    (31:53) How real is continual learning today?
    (33:43) Mostafa Dehghani’s background and path into AI
    (36:13) The story behind Universal Transformers
    (39:56) How Vision Transformers changed AI
    (43:47) Gemini, multimodality, and Nano Banana
    (47:46) Why multimodality helps build a world model
    (52:44) Why image generation is getting faster and more efficient
    (54:44) Hot takes
    (54:53) What the AI field is getting wrong
    (56:17) Why continual learning is underrated
    (57:26) Does RAG go away over time?
    (58:21) What people are too confident about in AI
    (59:56) If he were starting from scratch today
  • Benedict Evans: OpenAI’s Moat Problem & the Future of Software

    19/03/2026 | 1h 1 min.
    Is OpenAI trapped without a defensible moat? World-renowned independent tech analyst Benedict Evans returns to the MAD Podcast and argues that foundation models have zero network effects, making them closer to commodity infrastructure than the next iOS. We unpack OpenAI’s "mile wide, inch deep" usage problem, why simply having a "better model" does not solve the core UX challenge, and whether the hyperscalers' massive CapEx is a sustainable strategy or a fast track to financial gravity.

    We also explore the reality behind the recent "SaaSpocalypse", the structural shift from traditional enterprise systems to "improvised" and "ephemeral" software, and where the actual white space lies for founders and investors navigating the artificial intelligence hype cycle.

    (00:00) Intro
    (01:06) OpenAI's Focus Shift
    (03:12) ChatGPT usage: a "mile wide, inch deep"
    (09:03) Why better models do not solve the real problem
    (13:58) Why AI product teams are strategy takers, not strategy setters
    (15:38) Do agents help create defensibility?
    (20:06) OpenClaw and the "Desktop Linux" moment for AI
    (25:52) Why "everyone will build their own software" is completely wrong
    (28:09) Improvised software vs. institutionalized software
    (29:23) The Jevons Paradox: Why there will be more software, not less
    (36:15) Are we heading toward value destruction before value creation?
    (38:03) Circular revenue, leverage, and AI bubble dynamics
    (38:53) Big Tech's Trillion-Dollar CapEx Crisis & Financial Gravity
    (45:23) Why AI job exposure charts can be misleading
    (52:15) How Fortune 500 Execs are actually deploying AI today
    (56:45) The White Space: What this means for founders and investors
  • Everything Gets Rebuilt: The New AI Agent Stack | Harrison Chase, LangChain

    12/03/2026 | 46 mins.
    Harrison Chase, co-founder and CEO of LangChain, joins the MAD Podcast to explain why everything in AI is getting rebuilt. As agents evolve from simple prompt-based systems into software that can plan, use tools, write code, manage files, and remember things over time, the real frontier is shifting from the model itself to the stack around the model. In this conversation, we go deep on harnesses, subagents, filesystems, sandboxes, observability, memory, and the new infrastructure required to make AI agents actually work in the real world.

    (00:00) Intro - meet Harrison Chase
    (01:32) What changed in agents over the last year
    (03:57) Why coding agents are ahead
    (06:26) Do models commoditize the framework layer?
    (08:27) Harnesses, in plain English
    (10:11) Why system prompts matter so much
    (13:11) The upside — and downside — of subagents
    (15:31) Why a useful agent needs a filesystem
    (18:13) The core primitives of modern agents
    (19:12) Skills: the new primitive
    (20:19) What context compaction actually means
    (23:02) How memory works in agents
    (25:16) One mega-agent or many specialized agents?
    (27:46) Has MCP won?
    (29:38) Why agents need sandboxes
    (32:35) How sandboxes help with security
    (33:32) How Harrison Chase started LangChain
    (37:24) LangChain vs LangGraph vs Deep Agents
    (40:17) Why observability matters more for agents
    (41:48) Evals, no-code, and continuous improvement
    (44:41) What LangChain is building next
    (45:29) Where the real moat in AI lives


About The MAD Podcast with Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by Matt Turck, AI & data investor and Partner at FirstMark Capital.

