
The MAD Podcast with Matt Turck
Matt Turck

Available Episodes (5 of 101)
  • Can America Win the Open Source AI Race? — Olmo 3 with AI2’s Nathan Lambert & Luca Soldaini
    In this special release episode, Matt sits down with Nathan Lambert and Luca Soldaini from AI2 (the Allen Institute for AI) to break down one of the biggest open-source AI drops of the year: OLMo 3. At a moment when most labs are offering “open weights” and calling it a day, AI2 is doing the opposite — publishing the models, the data, the recipes, and every intermediate checkpoint that shows how the system was built. It’s an unusually transparent look into the inner machinery of a modern frontier-class model.
    Nathan and Luca walk us through the full pipeline — from pre-training and mid-training to long-context extension, SFT, preference tuning, and RLVR. They also explain what a thinking model actually is, why reasoning models have exploded in 2025, and how distillation from DeepSeek and Qwen reasoning models works in practice. If you’ve been trying to truly understand the “RL + reasoning” era of LLMs, this is the clearest explanation you’ll hear.
    We widen the lens to the global picture: why Meta’s retreat from open source created a “vacuum of influence,” how Chinese labs like Qwen, DeepSeek, Kimi, and Moonshot surged into that gap, and why so many U.S. companies are quietly building on Chinese open models today. Nathan and Luca offer a grounded, insider view of whether America can mount an effective open-source response — and what that response needs to look like.
    Finally, we talk about where AI is actually heading. Not the hype, not the doom — but the messy engineering reality behind modern model training, the complexity tax that slows progress, and why the transformation between now and 2030 may be dramatic without ever delivering a single “AGI moment.” If you care about the future of open models and the global AI landscape, this is an essential conversation.
    Allen Institute for AI (AI2)
    Website - https://allenai.org
    X/Twitter - https://x.com/allen_ai
    Nathan Lambert
    Blog - https://www.interconnects.ai
    LinkedIn - https://www.linkedin.com/in/natolambert/
    X/Twitter - https://x.com/natolambert
    Luca Soldaini
    Blog - https://soldaini.net
    LinkedIn - https://www.linkedin.com/in/soldni/
    X/Twitter - https://x.com/soldni
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    Matt Turck (Managing Director)
    Blog - https://mattturck.com
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    (00:00) – Cold Open
    (00:39) – Welcome & today’s big announcement
    (01:18) – Introducing the Olmo 3 model family
    (02:07) – What “base models” really are (and why they matter)
    (05:51) – Dolma 3: the data behind Olmo 3
    (08:06) – Performance vs Qwen, Gemma, DeepSeek
    (10:28) – What true open source means (and why it’s rare)
    (12:51) – Intermediate checkpoints, transparency, and why AI2 publishes everything
    (16:37) – Why Qwen is everywhere (including U.S. startups)
    (18:31) – Why Chinese labs go open source (and why U.S. labs don’t)
    (20:28) – Inside ATOM: the U.S. response to China’s model surge
    (22:13) – The rise of “thinking models” and inference-time scaling
    (35:58) – The full Olmo pipeline, explained simply
    (46:52) – Pre-training: data, scale, and avoiding catastrophic spikes
    (50:27) – Mid-training (tail patching) and avoiding test leakage
    (52:06) – Why long-context training matters
    (55:28) – SFT: building the foundation for reasoning
    (1:04:53) – Preference tuning & why DPO still works
    (1:10:51) – The hard part: RLVR, long reasoning chains, and infrastructure pain
    (1:13:59) – Why RL is so technically brutal
    (1:18:17) – Complexity tax vs AGI hype
    (1:21:58) – How everyone can contribute to the future of AI
    (1:27:26) – Closing thoughts
    Duration: 1:28:10
  • Intelligence Isn’t Enough: Why Energy & Compute Decide the AGI Race – Eiso Kant
    Frontier AI is colliding with real-world infrastructure. Eiso Kant (Co-CEO & Co-Founder, Poolside) joins the MAD Podcast to unpack Project Horizon — a multi-gigawatt West Texas build — and why frontier labs must own energy, compute, and intelligence to compete. We map token economics, cloud-style margins, and the staged 250 MW rollout using 2.5 MW modular skids.
    Then we get operational: the CoreWeave anchor partnership, environmental choices (SCR, renewables + gas + batteries), community impact, and how Poolside plans to bring capacity online quickly without renting away margin — plus the enterprise motion (defense to Fortune 500) powered by forward-deployed research engineers.
    Finally, we go deep on training. Eiso lays out RL2L (Reinforcement Learning to Learn) — aimed at reverse-engineering the web’s thoughts and actions — why intelligence may commoditize, what that means for agents, and how coding served as a proxy for long-horizon reasoning before expanding to broader knowledge work.
    Poolside
    Website - https://poolside.ai
    X/Twitter - https://x.com/poolsideai
    Eiso Kant
    LinkedIn - https://www.linkedin.com/in/eisokant/
    X/Twitter - https://x.com/eisokant
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    Matt Turck (Managing Director)
    Blog - https://www.mattturck.com
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    (00:00) Cold open – “Intelligence becomes a commodity”
    (00:23) Host intro – Project Horizon & RL2L
    (01:19) Why Poolside exists amid frontier labs
    (04:38) Project Horizon: building one of the largest US data center campuses
    (07:20) Why own infra: scale, cost, and avoiding “cosplay”
    (10:06) Economics deep dive: $8B for 250 MW, capex/opex, margins
    (16:47) CoreWeave partnership: anchor tenant + flexible scaling
    (18:24) Hiring the right tail: building a physical infra org
    (30:31) RL today → agentic RL and long-horizon tasks
    (37:23) RL2L revealed: reverse-engineering the web’s thoughts & actions
    (39:32) Continuous learning and the “hot stove” limitation
    (43:30) Agents debate: thin wrappers, differentiation, and model collapse
    (49:10) “Is AI plateauing?” — chip cycles, scale limits, and new axes
    (53:49) Why software was the proxy; expanding to enterprise knowledge work
    (55:17) Model status: Malibu → Laguna (small/medium/large)
    (57:31) Poolside’s commercial reality today: defense, Fortune 500, FDRE
    (1:02:43) Global team, avoiding the echo chamber
    (1:04:34) Next 12–18 months: frontier models + infra scale
    (1:05:52) Closing
    Duration: 1:06:28
  • State of AI 2025 with Nathan Benaich: Power Deals, Reasoning Breakthroughs, Real Revenue
    Power is the new bottleneck, reasoning got real, and the business finally caught up. In this wide-ranging conversation, I sit down with Nathan Benaich, Founder and General Partner at Air Street Capital, to discuss the newly published 2025 State of AI report — what’s actually working, what’s hype, and where the next edge will come from. We start at the physical layer: energy procurement, PPAs, off-grid builds, and why water and grid constraints are turning power — not GPUs — into the decisive moat.
    From there, we move into capability: reasoning models acting as AI co-scientists in verifiable domains, and the “chain-of-action” shift in robotics that’s taking us from polished demos to dependable deployments. Along the way, we examine the market reality — who’s making real revenue, how margins actually behave once tokens and inference meet pricing, and what all of this means for builders and investors.
    We also zoom out to the ecosystem: NVIDIA’s position vs. custom silicon, China’s split stack, and the rise of sovereign AI (and the “sovereignty washing” that comes with it). The policy and security picture gets a hard look too — regulation’s vibe shift, data-rights realpolitik, and what agents and MCP mean for cyber risk and adoption.
    Nathan closes with where he’s placing bets (bio, defense, robotics, voice) and three predictions for the next 12 months.
    Nathan Benaich
    Blog - https://www.nathanbenaich.com
    X/Twitter - https://x.com/nathanbenaich
    Source: State of AI Report 2025 (9/10/2025)
    Air Street Capital
    Website - https://www.airstreet.com
    X/Twitter - https://x.com/airstreet
    Matt Turck (Managing Director)
    Blog - https://www.mattturck.com
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    (0:00) – Cold Open: “Gargantuan money, real reasoning”
    (0:40) – Intro: State of AI 2025 with Nathan Benaich
    (02:06) – Reasoning got real: from chain-of-thought to verified math wins
    (04:11) – AI co-scientist: hypotheses, wet-lab validation, fewer “dumb stochastic parrots”
    (04:44) – Chain-of-action robotics: plan → act you can audit
    (05:13) – Humanoids vs. warehouse reality: where robots actually stick first
    (06:32) – The business caught up: who’s making real revenue now
    (08:26) – Adoption & spend: Ramp stats, retention, and the shadow-AI gap
    (11:00) – Margins debate: tokens, pricing, and the thin-wrapper trap
    (14:02) – Bubble or boom? Wall Street vs. SF vibes (and circular deals)
    (19:54) – Power is the bottleneck: $50B/GW capex and the new moat
    (21:02) – PPAs, gas turbines, and off-grid builds: the procurement game
    (23:54) – Water, grids, and NIMBY: sustainability gets political
    (25:08) – NVIDIA’s moat: 90% of papers, Broadcom/AMD, and custom silicon
    (28:47) – China split-stack: Huawei, Cambricon, and export zigzags
    (30:30) – Sovereign AI or “sovereignty washing”? Open source as leverage
    (40:40) – Regulation & safety: from Bletchley to “AI Action” — the vibe shift
    (44:06) – Safety budgets vs. lab spend; models that game evals
    (44:46) – Data rights realpolitik: $1.5B signals the new training cost
    (47:04) – Cyber risk in the agent era: MCP, malware LMs, state actors
    (50:19) – Agents that convert: search → commerce and the demo flywheel
    (54:18) – VC lens: where Nathan is investing (bio, defense, robotics, voice)
    (68:29) – Predictions: power politics, AI neutrality, end-to-end discoveries
    (1:02:13) – Wrap: what to watch next & where to find the report (stateof.ai)
    Duration: 1:03:15
  • Are We Misreading the AI Exponential? Julian Schrittwieser on Move 37 & Scaling RL (Anthropic)
    Are we failing to understand the exponential, again? My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind on AlphaGo Zero & MuZero). We unpack his viral post (“Failing to Understand the Exponential, again”) and what it looks like when task length doubles every 3–4 months — pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027. We talk about the original Move 37 moment and whether today’s AI models can spark alien insights in code, math, and science — including Julian’s timeline for when AI could produce Nobel-level breakthroughs.
    We go deep on the recipe of the moment — pre-training + RL — why it took time to combine them, what “RL from scratch” gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service. We also cover evals & Goodhart’s law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think “Golden Gate Claude”), and how safety & alignment actually surface in Anthropic’s launch process.
    Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp — fast, but not a discontinuity.
    Julian Schrittwieser
    Blog - https://www.julian.ac
    X/Twitter - https://x.com/mononofu
    Viral post: Failing to understand the exponential, again (9/27/2025)
    Anthropic
    Website - https://www.anthropic.com
    X/Twitter - https://x.com/anthropicai
    Matt Turck (Managing Director)
    Blog - https://www.mattturck.com
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    (00:00) Cold open — “We’re not seeing any slowdown.”
    (00:32) Intro — who Julian is & what we cover
    (01:09) The “exponential” from inside frontier labs
    (04:46) 2026–2027: agents that work a full day; expert-level breadth
    (08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value
    (10:26) Move 37 — what actually happened and why it mattered
    (13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?
    (16:25) Discontinuity vs smooth progress (and warning signs)
    (19:08) Does pre-training + RL get us there? (AGI debates aside)
    (20:55) Sutton’s “RL from scratch”? Julian’s take
    (23:03) Julian’s path: Google → DeepMind → Anthropic
    (26:45) AlphaGo (learn + search) in plain English
    (30:16) AlphaGo Zero (no human data)
    (31:00) AlphaZero (one algorithm: Go, chess, shogi)
    (31:46) MuZero (planning with a learned world model)
    (33:23) Lessons for today’s agents: search + learning at scale
    (34:57) Do LLMs already have implicit world models?
    (39:02) Why RL on LLMs took time (stability, feedback loops)
    (41:43) Compute & scaling for RL — what we see so far
    (42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards
    (44:36) RL training data & the “flywheel” (and why quality matters)
    (48:02) RL & Agents 101 — why RL unlocks robustness
    (50:51) Should builders use RL-as-a-service? Or just tools + prompts?
    (52:18) What’s missing for dependable agents (capability vs engineering)
    (53:51) Evals & Goodhart — internal vs external benchmarks
    (57:35) Mechanistic interpretability & “Golden Gate Claude”
    (1:00:03) Safety & alignment at Anthropic — how it shows up in practice
    (1:03:48) Jobs: human–AI complementarity (comparative advantage)
    (1:06:33) Inequality, policy, and the case for 10× productivity → abundance
    (1:09:24) Closing thoughts
    Duration: 1:09:56
  • How GPT-5 Thinks — OpenAI VP of Research Jerry Tworek
    What does it really mean when GPT-5 “thinks”? In this conversation, OpenAI’s VP of Research Jerry Tworek explains how modern reasoning models work in practice — why pretraining and reinforcement learning (RL/RLHF) are both essential, what that on-screen “thinking” actually does, and when extra test-time compute helps (or doesn’t). We trace the evolution from o1 (a tech demo good at puzzles) to o3 (the tool-use shift) to GPT-5 (Jerry calls it “o3.1-ish”), and talk through verifiers, reward design, and the real trade-offs behind “auto” reasoning modes.
    We also go inside OpenAI: how research is organized, why collaboration is unusually transparent, and how the company ships fast without losing rigor. Jerry shares the backstory on competitive-programming results like ICPC, what they signal (and what they don’t), and where agents and tool use are genuinely useful today. Finally, we zoom out: could pretraining + RL be the path to AGI? This is the MAD Podcast — AI for the 99%. If you’re curious about how these systems actually work (without needing a PhD), this episode is your map to the current AI frontier.
    OpenAI
    Website - https://openai.com
    X/Twitter - https://x.com/OpenAI
    Jerry Tworek
    LinkedIn - https://www.linkedin.com/in/jerry-tworek-b5b9aa56
    X/Twitter - https://x.com/millionint
    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap
    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck
    (00:00) Intro
    (01:01) What Reasoning Actually Means in AI
    (02:32) Chain of Thought: Models Thinking in Words
    (05:25) How Models Decide Thinking Time
    (07:24) Evolution from o1 to o3 to GPT-5
    (11:00) Before OpenAI: Growing up in Poland, Dropping out of School, Trading
    (20:32) Working on Robotics and Rubik’s Cube Solving
    (23:02) A Day in the Life: Talking to Researchers
    (24:06) How Research Priorities Are Determined
    (26:53) Collaboration vs IP Protection at OpenAI
    (29:32) Shipping Fast While Doing Deep Research
    (31:52) Using OpenAI’s Own Tools Daily
    (32:43) Pre-Training Plus RL: The Modern AI Stack
    (35:10) Reinforcement Learning 101: Training Dogs
    (40:17) The Evolution of Deep Reinforcement Learning
    (42:09) When GPT-4 Seemed Underwhelming at First
    (45:39) How RLHF Made GPT-4 Actually Useful
    (48:02) Unsupervised vs Supervised Learning
    (49:59) GRPO and How DeepSeek Accelerated US Research
    (53:05) What It Takes to Scale Reinforcement Learning
    (55:36) Agentic AI and Long-Horizon Thinking
    (59:19) Alignment as an RL Problem
    (1:01:11) Winning ICPC World Finals Without Specific Training
    (1:05:53) Applying RL Beyond Math and Coding
    (1:09:15) The Path from Here to AGI
    (1:12:23) Pure RL vs Language Models
    Duration: 1:16:04


About The MAD Podcast with Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI, & Data landscape, hosted by Matt Turck, a leading AI & data investor and Partner at FirstMark Capital.
