
The Neuron: AI Explained


57 episodes


    OpenAI Researcher Explains How AI Hides Its Thinking (w/ OpenAI’s Bowen Baker)

    23/1/2026 | 55 mins.
    AI reasoning models don’t just give answers — they plan, deliberate, and sometimes try to cheat.

    In this episode of The Neuron, we’re joined by Bowen Baker, Research Scientist at OpenAI, to explore whether we can monitor AI reasoning before things go wrong — and why that transparency may not last forever.

    Bowen walks us through real examples of AI reward hacking, explains why monitoring chain-of-thought is often more effective than checking outputs, and introduces the idea of a “monitorability tax” — trading raw performance for safety and transparency.

    We also cover:
    • Why smaller models thinking longer can be safer than bigger models
    • How AI systems learn to hide misbehavior
    • Why suppressing “bad thoughts” can backfire
    • The limits of chain-of-thought monitoring
    • Bowen’s personal view on open-source AI and safety risks

    If you care about how AI actually works — and what could go wrong — this conversation is essential.

    Resources:
    • Evaluating chain-of-thought monitorability | OpenAI: https://openai.com/index/evaluating-chain-of-thought-monitorability/
    • Understanding neural networks through sparse circuits | OpenAI: https://openai.com/index/understanding-neural-networks-through-sparse-circuits/
    • OpenAI’s alignment blog: https://alignment.openai.com/
    👉 Subscribe for more interviews with the people building AI
    👉 Join the newsletter at https://theneuron.ai

    The Hidden Cost of AI Agents No One Talks About

    20/1/2026 | 1h
    Everyone is rushing to build AI agents — but most companies are setting themselves up for failure.

    In this episode of The Neuron, Darin Patterson, VP of Market Strategy at Make, explains why agentic AI only works if your automation foundation is solid first. We break down when to use deterministic workflows versus AI agents, how to avoid fragile automation sprawl, and why visibility into your entire automation landscape is now mission-critical.

    You’ll see real examples of building agents in Make, how Model Context Protocol (MCP) fits into modern workflows, and why orchestration — not hype — is the real unlock for scaling AI safely inside organizations.

    Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

    Why IBM Wants AI to Be Boring

    13/1/2026 | 53 mins.
    IBM just released Granite 4.0, a new family of open language models designed to be fast, memory-efficient, and enterprise-ready — and it represents a very different philosophy from today’s frontier AI race.

    In this episode of The Neuron, IBM Research’s David Cox joins us to unpack why IBM treats AI models as tools rather than entities, how hybrid architectures dramatically reduce memory and cost, and why openness, transparency, and external audits matter more than ever for real-world deployment.

    We dive into long-context efficiency, agent safety, LoRA adapters, on-device AI, voice interfaces, and why the future of AI may look a lot more boring — in the best possible way.

    If you’re building AI systems for production, agents, or enterprise workflows, this conversation is required listening.

    Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

    This AI Grows a Brain During Training (Pathway’s AI w/ Zuzanna Stamirowska)

    6/1/2026 | 48 mins.
    Imagine an AI that doesn’t just output answers — it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO & Cofounder of Pathway, to break down the world’s first post-Transformer frontier model: BDH — the Dragon Hatchling architecture.

    Zuzanna explains why current language models are stuck in a “Groundhog Day” loop — waking up with no memory — and how Pathway’s architecture introduces true temporal reasoning and continual learning.

    We explore:
    • Why Transformers lack real memory and time awareness
    • How BDH uses brain-like neurons, synapses, and emergent structure
    • How models can “get bored,” adapt, and strengthen connections
    • Why Pathway sees reasoning — not language — as the core of intelligence
    • How BDH enables infinite context, live learning, and interpretability
    • Why gluing two trained models together actually works in BDH
    • The path to AGI through generalization, not scaling
    • Real-world early adopters (Formula 1, NATO, French Postal Service)
    • Safety, reversibility, checkpointing, and building predictable behavior
    • Why this architecture could power the next era of scientific innovation

    From brain-inspired message passing to emergent neural structures that literally appear during training, this is one of the most ambitious rethinks of AI architecture since Transformers themselves.

    If you want a window into what comes after LLMs, this interview is essential.

    Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai

    This 24-Year-Old Raised $64M to Build an AI Smarter Than the World's Best Mathematicians

    30/12/2025 | 59 mins.
    Carina Hong dropped out of Stanford's PhD program to build "mathematical superintelligence" — and just raised $64M to do it. In this episode, we explore what that actually means: an AI that doesn't just solve math problems but discovers new theorems, proves them formally, and gets smarter with each iteration. Carina explains how her team solved a 130-year-old problem about Lyapunov functions, disproved a 30-year-old graph theory conjecture, and why math is the secret "bedrock" for everything from chip design to quant trading to coding agents. We also discuss the fascinating connections between neuroscience, AI, and mathematics.

    Learn more about Axiom: https://axiommath.ai/

    Subscribe to The Neuron newsletter: https://theneuron.ai


About The Neuron: AI Explained

Hosted by Grant Harvey and Corey Noles, The Neuron covers the latest AI developments, trends, and research, with digestible, informative, and authoritative takes that get you up to speed and help you become an authority in your own circles. New episodes every Tuesday on all podcasting platforms and YouTube. Subscribe to our newsletter: https://www.theneurondaily.com/subscribe


