
The MAD Podcast with Matt Turck

Matt Turck

Available Episodes (5 of 78)
  • Box CEO: AI Agents Explained - Real Use Cases, Challenges & What’s Next | Aaron Levie
    In this episode, we sit down with Aaron Levie, CEO and co-founder of Box, for a wide-ranging conversation that’s equal parts insightful, technical, and fun. We kick things off with a candid discussion about what it’s like to be a public company CEO during times of volatility, and then rewind to the early days of Box — from dorm room experiments to cold emailing Mark Cuban and dropping out of college.

    From there, we dive deep into how AI is transforming the enterprise. Aaron shares how Box is layering AI agents, RAG systems, and model orchestration on top of decades of enterprise content infrastructure — and why “95% of enterprise data is underutilized.”

    We explore what’s actually working with AI in production, what’s still breaking, and how companies can avoid common pitfalls. From building hubs for document-specific RAG to thinking through agent-to-agent interoperability, Aaron unpacks the architecture of Box’s AI platform — and why they’re staying out of the model training wars entirely. We also dig into AI culture inside large organizations, the trade-offs of going public, and why Levie believes every enterprise interface is about to change.

    Whether you're a founder, engineer, enterprise buyer, or just trying to figure out how AI agents will reshape knowledge work, this conversation is full of practical insights and candid takes from one of the sharpest minds in tech.

    Box
    Website - https://www.box.com
    X/Twitter - https://twitter.com/Box

    Aaron Levie
    LinkedIn - https://www.linkedin.com/in/boxaaron
    X/Twitter - https://x.com/levie

    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck

    Chapters:
    (00:00) Intro
    (01:51) Navigating uncertainty as a public company CEO
    (14:48) The Box origin story: college, cold emails, and Mark Cuban
    (23:39) Cloud transformation vs. the AI wave
    (30:15) The reality of AI in the enterprise: proof of concept vs. deployment
    (34:37) Inside Box’s AI platform: Hubs, agents, and more
    (44:15) Why Box won’t build its own model (and the dangers of fine-tuning)
    (51:51) What’s working — and what’s not — with AI agents
    (1:04:42) Building an AI culture at Box
    (1:13:22) The future of enterprise software and Box’s roadmap
    --------  
    1:15:46
  • Inside the Mind of Snowflake’s CEO: Bold Bets in the AI Arms Race
    In this episode, we sit down with Sridhar Ramaswamy, CEO of Snowflake, for an in-depth conversation about the company’s transformation from a cloud analytics platform into a comprehensive AI data cloud. Sridhar shares insights on Snowflake’s shift toward open formats like Apache Iceberg and why monetizing storage was, in his view, a strategic misstep.

    We also dive into Snowflake’s growing AI capabilities, including tools like Cortex Analyst and Cortex Search, and discuss how the company scaled AI deployments at an impressive pace. Sridhar reflects on lessons from his previous startup, Neeva, and offers candid thoughts on the search landscape, the future of BI tools, real-time analytics, and why partnering with OpenAI and Anthropic made more sense than building Snowflake’s own foundation models.

    Snowflake
    Website - https://www.snowflake.com
    X/Twitter - https://x.com/snowflakedb

    Sridhar Ramaswamy
    LinkedIn - https://www.linkedin.com/in/sridhar-ramaswamy
    X/Twitter - https://x.com/RamaswmySridhar

    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck

    Chapters:
    (00:00) Intro and current market tumult
    (02:48) The evolution of Snowflake from IPO to today
    (07:22) Why Snowflake’s earliest adopters came from financial services
    (15:33) Resistance to change and the philosophical gap between structured data and AI
    (17:12) What is the AI Data Cloud?
    (23:15) Snowflake’s AI agents: Cortex Search and Cortex Analyst
    (25:03) How did Sridhar’s experience at Google and Neeva shape his product vision?
    (29:43) Was Neeva simply ahead of its time?
    (38:37) The Epiphany mafia
    (40:08) The current state of search and Google’s conundrum
    (46:45) “There’s no AI strategy without a data strategy”
    (56:49) Embracing open data formats with Iceberg
    (01:01:45) The Modern Data Stack and the future of BI
    (01:08:22) The role of real-time data
    (01:11:44) Current state of enterprise AI: from PoCs to production
    (01:17:54) Building your own models vs. using foundation models
    (01:19:47) DeepSeek and open source AI
    (01:21:17) Snowflake’s 1M Minds program
    (01:21:51) Snowflake AI Hub
    --------  
    1:23:41
  • Beyond Brute Force: Chollet & Knoop on ARC AGI 2, the Benchmark Breaking LLMs and the Search for True Machine Intelligence
    In this fascinating episode, we dive deep into the race towards true AI intelligence, AGI benchmarks, test-time adaptation, and program synthesis with star AI researcher (and philosopher) François Chollet, creator of Keras and the ARC AGI benchmark, and Mike Knoop, co-founder of Zapier and now co-founder with François of both the ARC Prize and the research lab Ndea. With the launch of ARC Prize 2025 and ARC-AGI 2, they explain why existing LLMs fall short on true intelligence tests, how new models like O3 mark a step change in capabilities, and what it will really take to reach AGI.

    We cover everything from the technical evolution of ARC 1 to ARC 2, the shift toward test-time reasoning, and the role of program synthesis as a foundation for more general intelligence. The conversation also explores the philosophical underpinnings of intelligence, the structure of the ARC Prize, and the motivation behind launching Ndea — a new AGI research lab that aims to build a "factory for rapid scientific advancement." Whether you're deep in the AI research trenches or just fascinated by where this is all headed, this episode offers clarity and inspiration.

    Ndea
    Website - https://ndea.com
    X/Twitter - https://x.com/ndea

    ARC Prize
    Website - https://arcprize.org
    X/Twitter - https://x.com/arcprize

    François Chollet
    LinkedIn - https://www.linkedin.com/in/fchollet
    X/Twitter - https://x.com/fchollet

    Mike Knoop
    X/Twitter - https://x.com/mikeknoop

    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck

    Chapters:
    (00:00) Intro
    (01:05) Introduction to ARC Prize 2025 and ARC-AGI 2
    (02:07) What is ARC and how it differs from other AI benchmarks
    (02:54) Why current models struggle with fluid intelligence
    (03:52) Shift from static LLMs to test-time adaptation
    (04:19) What ARC measures vs. traditional benchmarks
    (07:52) Limitations of brute-force scaling in LLMs
    (13:31) Defining intelligence: adaptation and efficiency
    (16:19) How O3 achieved a massive leap in ARC performance
    (20:35) Speculation on O3's architecture and test-time search
    (22:48) Program synthesis: what it is and why it matters
    (28:28) Combining LLMs with search and synthesis techniques
    (34:57) The ARC Prize structure: efficiency track, private vs. public
    (42:03) Open source as a requirement for progress
    (44:59) What's new in ARC-AGI 2 and human benchmark testing
    (48:14) Capabilities ARC-AGI 2 is designed to test
    (49:21) When will ARC-AGI 2 be saturated? AGI timelines
    (52:25) Founding of Ndea and why now
    (54:19) Vision beyond AGI: a factory for scientific advancement
    (56:40) What Ndea is building and why it's different from LLM labs
    (58:32) Hiring and remote-first culture at Ndea
    (59:52) Closing thoughts and the future of AI research
    --------  
    1:00:45
  • Why This Ex-Meta Leader is Rethinking AI Infrastructure | Lin Qiao, CEO, Fireworks AI
    In 2022, Lin Qiao decided to leave Meta, where she was managing several hundred engineers, to start Fireworks AI. In this episode, we sit down with Lin for a deep dive on her work, starting with her leadership on PyTorch, now one of the most influential machine learning frameworks in the industry, powering research and production at scale.

    Now at the helm of Fireworks AI, Lin is leading a new wave in generative AI infrastructure, simplifying model deployment and optimizing performance to empower all developers building with Gen AI technologies.

    We dive into the technical core of Fireworks AI, uncovering their innovative strategies for model optimization, Function Calling in agentic development, and low-level breakthroughs at the GPU and CUDA layers. (A brief illustrative sketch of the function-calling pattern follows the episode list below.)

    Fireworks AI
    Website - https://fireworks.ai
    X/Twitter - https://twitter.com/FireworksAI_HQ

    Lin Qiao
    LinkedIn - https://www.linkedin.com/in/lin-qiao-22248b4
    X/Twitter - https://twitter.com/lqiao

    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck

    Chapters:
    (00:00) Intro
    (01:20) What is Fireworks AI?
    (02:47) What is PyTorch?
    (12:50) Traditional ML vs GenAI
    (14:54) AI’s enterprise transformation
    (16:16) From Meta to Fireworks
    (19:39) Simplifying AI infrastructure
    (20:41) How Fireworks clients use GenAI
    (22:02) How many models are powered by Fireworks
    (30:09) LLM partitioning
    (34:43) Real-time vs pre-set search
    (36:56) Reinforcement learning
    (38:56) Function calling
    (44:23) Low-level architecture overview
    (45:47) Cloud GPUs & hardware support
    (47:16) VPC vs on-prem vs local deployment
    (49:50) Decreasing inference costs and its business implications
    (52:46) Fireworks roadmap
    (55:03) AI future predictions
    --------  
    59:14
  • Top AI Researcher on GPT 4.5, DeepSeek and Agentic RAG | Douwe Kiela, CEO, Contextual AI
    Retrieval-Augmented Generation (RAG) has become a dominant architecture in modern AI deployments, and in this episode, we sit down with Douwe Kiela, who co-authored the original RAG paper in 2020. Douwe is now the founder and CEO of Contextual AI, a startup focused on helping enterprises deploy RAG as an agentic system.

    We start the conversation with Douwe's thoughts on the very latest advancements in generative AI, including GPT-4.5, DeepSeek and the exciting paradigm shift towards test-time compute, as well as the US-China rivalry in AI.

    We then dive into RAG: definition, origin story and core architecture. Douwe explains the evolution of RAG into RAG 2.0 and Agentic RAG, emphasizing the importance of self-learning systems over individual models and the role of synthetic data. (A brief illustrative sketch of the basic retrieve-then-generate loop follows the episode list below.)

    We close with the challenges and opportunities of deploying AI in real-world enterprise settings, discussing the balance between accuracy requirements and the inherent inaccuracies of AI systems.

    Contextual AI
    Website - https://contextual.ai
    X/Twitter - https://x.com/ContextualAI

    Douwe Kiela
    LinkedIn - https://www.linkedin.com/in/douwekiela
    X/Twitter - https://x.com/douwekiela

    FIRSTMARK
    Website - https://firstmark.com
    X/Twitter - https://twitter.com/FirstMarkCap

    Matt Turck (Managing Director)
    LinkedIn - https://www.linkedin.com/in/turck/
    X/Twitter - https://twitter.com/mattturck

    Chapters:
    (00:00) Intro
    (01:57) Thoughts on the latest AI models: GPT-4.5, Sonnet 3.7, Grok 3
    (04:50) The test time compute paradigm shift
    (06:47) Unsupervised learning vs reasoning: a false dichotomy
    (07:30) The significance of DeepSeek
    (10:29) USA vs. China: is the AI war overblown?
    (12:19) Controlling AI hallucinations at the model level
    (13:51) RAG: definition and origin story
    (18:46) Why the Transformers paper initially felt underwhelming
    (20:41) The core architecture of RAG
    (26:06) RAG vs. fine-tuning vs. long context windows
    (30:53) RAG 2.0: Thinking in systems and not models
    (31:28) Data extraction and data curation for RAG
    (35:59) Contextual Language Models (CLMs)
    (38:04) Finetuning and alignment techniques: GRIT, KTO, LENS
    (40:40) Agentic RAG
    (41:36) General vs. specialized RAG agents
    (44:35) Synthetic data in AI
    (45:51) Deploying AI in the enterprise
    (48:07) How tolerant are enterprises to AI hallucinations?
    (49:35) The future of Contextual AI
    --------  
    50:44
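For context on the function-calling pattern mentioned in the Fireworks AI episode above, here is a minimal, self-contained sketch of the loop: the model emits a structured call naming a registered tool, host code dispatches it, and the result is handed back so the model can compose its final answer. This is illustrative only, not Fireworks AI's actual API; the get_weather tool and the hard-coded model_output are assumptions made for the example.

    import json

    # Tools the application is willing to expose to the model.
    def get_weather(city: str) -> str:
        # Hypothetical tool; a real agent would call a weather service here.
        return f"Sunny and 21C in {city}"

    TOOLS = {"get_weather": get_weather}

    # Stand-in for a model response requesting a tool call (a real LLM would produce this).
    model_output = json.dumps({"name": "get_weather", "arguments": {"city": "Paris"}})

    # Host-side dispatch: parse the requested call, run the matching tool, capture the result.
    call = json.loads(model_output)
    result = TOOLS[call["name"]](**call["arguments"])
    print(result)  # The result would be appended to the conversation for the model's final answer.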
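Similarly, for the retrieval-augmented generation discussed in the Contextual AI episode, the following is a minimal sketch of the basic retrieve-then-generate loop. It assumes a toy word-overlap retriever and a stub generate_answer function in place of a real vector index and LLM; it is not Contextual AI's system.

    # Minimal RAG sketch: retrieve relevant passages, then condition generation on them.
    def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
        # Toy retriever: rank passages by word overlap with the query.
        query_words = set(query.lower().split())
        ranked = sorted(corpus, key=lambda p: len(query_words & set(p.lower().split())), reverse=True)
        return ranked[:k]

    def generate_answer(query: str, passages: list[str]) -> str:
        # Stand-in for an LLM call; a real system would prompt a model with the retrieved context.
        return f"Answer to {query!r}, grounded in: " + " | ".join(passages)

    corpus = [
        "RAG pairs a retriever with a generator so answers are grounded in documents.",
        "Agentic RAG adds planning and tool use on top of retrieval.",
        "Snowflakes form from supercooled water droplets.",
    ]
    query = "What is agentic RAG?"
    print(generate_answer(query, retrieve(query, corpus)))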


About The MAD Podcast with Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by Matt Turck, a leading AI and data investor and Partner at FirstMark Capital.