
Coding Chats

John Crickett

81 episodes

  • Coding Chats

I got into computers to avoid people, then they put me in charge of them!

    14/05/2026 | 49 mins.
    Coding Chats Episode 78 - John Crickett talks to Robert Harris, an experienced engineering leader. Robert shares hard-won lessons from years of leading software teams, drawing on a distinctive "human systems" lens to explain why so many engineering organisations struggle — not because of bad people, but because of broken systems, misaligned leadership, and invisible cultural forces.

    The conversation weaves together philosophy, practical management advice, and candid personal anecdotes, making it equally relevant for first-time engineering managers and seasoned CTOs. The central thread throughout is that software is fundamentally a human endeavour, and leaders who treat it like a purely technical one will keep running into the same problems.

    Chapters
    0:00 — Every Problem is a Systems Problem
    3:00 — Labelling vs. Diagnosing: The Human Systems Approach
    6:15 — Poor Performance Is a System Failure, Not a People Failure
    9:10 — AI, Flat Orgs, and the Pressure on Engineering Managers
    11:30 — Diagnosing a Broken Team: A Real-World Turnaround
    24:05 — People Are Not Interchangeable Components
    26:00 — Culture: What Happens When Nobody's Watching
    33:00 — The Power Gradient and Cross-Team Collaboration
    39:00 — The C-Suite Distance Problem
    42:00 — Building Culture in Remote and Distributed Teams
    46:00 — Software Engineering Is a Humanity

    Robert's Links:
https://www.linkedin.com/in/robert-n-harris/
    coded2lead.com

    John's Links:
    John's LinkedIn: https://www.linkedin.com/in/johncrickett/
John's YouTube: https://www.youtube.com/@johncrickett
    John's Twitter: https://x.com/johncrickett
    John's Bluesky: https://bsky.app/profile/johncrickett.bsky.social

Check out John's software engineering newsletters:

    Coding Challenges: https://codingchallenges.substack.com/ shares real-world project ideas you can use to level up your coding skills.

    Developing Skills: https://read.developingskills.fyi/ covers everything from system design to soft skills, helping engineers progress from junior to staff+, or move onto a management track.

    Takeaways
    People run on emotion and safety, not logic — lead them accordingly.
    When someone underperforms, look at the system before you look at the person.
    Labelling people as "difficult" or "lazy" is a way of avoiding the real problem.
    AI is accelerating code generation, but the human bottleneck downstream is getting worse, not better.
    The institutional memory inside a team is worth far more than anything in your wiki.
    Culture is what happens when nobody's watching — not what's written on the wall.
    If you send Slack messages at 10pm, your team will think there's no such thing as work-life balance.
    Only authorised people should authorise work — casual remarks from leaders land as commands.
    Co-location without connection isn't culture, it's a terrarium.
    Computers are a science, but software is a humanity.
  • Coding Chats

    The Death of Writing Code: OpenAI's Engineer on the Rise of Harness Engineering

    07/05/2026 | 50 mins.
    Coding Chats Episode 77 — Arnaud Fournier, Forward Deployed Engineer at OpenAI, talks to John Crickett about how AI is fundamentally reshaping software engineering. He explores how OpenAI's own engineers have largely moved away from writing code line-by-line, shifting instead to what he calls "harness engineering" — orchestrating agents, preparing context, and steering AI to do the heavy lifting.

    The conversation covers practical ground for engineers at every level: how to successfully adopt agentic coding in your workflow, best practices for integrating tools like Codex into enterprise environments, and what it's really like to work at the frontier of AI deployment across industries like semiconductors, life sciences, and finance.
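
    A minimal sketch of what such a harness loop might look like, for readers who want the idea made concrete. Everything in it is hypothetical: call_model and run_checks are invented placeholders, not OpenAI's actual tooling, and the episode describes the practice at a higher level than any single loop.

    ```python
    # Hypothetical "harness engineering" loop: the engineer curates the
    # context and the acceptance checks; the model writes the code.
    # call_model and run_checks are placeholders, not a real API surface.

    def call_model(prompt):
        """Placeholder for an LLM call via whatever agent or client you use."""
        raise NotImplementedError

    def run_checks(code):
        """Placeholder deterministic harness: tests, linters, type checks.
        Returns a list of failure messages; empty means the change passed."""
        raise NotImplementedError

    def harness_loop(task, context, max_iters=5):
        prompt = f"{context}\n\nTask: {task}"
        for _ in range(max_iters):
            code = call_model(prompt)
            failures = run_checks(code)
            if not failures:
                return code  # the harness, not the human, accepted the change
            # Feed the failures back so the next attempt can self-correct.
            prompt = (f"{context}\n\nTask: {task}\n\nPrevious attempt failed:\n"
                      + "\n".join(failures))
        return None  # escalate to a human after repeated failures
    ```

    The point of the sketch is where the effort moves: into designing run_checks and preparing the context, rather than into the code the model produces.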

    Chapters
    00:00 Understanding the Role of Forward Deployed Engineers
    03:21 The Integration Process: Challenges and Solutions
    06:25 Optimizing AI Solutions with Codex
    09:38 Leveraging Codex for Team Efficiency
    12:28 Best Practices for Using Codex in Engineering Workflows
    15:29 Setting Up for Success in Enterprise AI Projects
    18:26 Navigating Stakeholder Engagement and Requirements
    21:16 The Future of AI in Enterprise Solutions
    25:53 Building Proof of Concept Solutions
    28:33 Collaborative Development and Model Improvement
    30:45 The Rise of Codex and User Adoption
    33:36 Integrating AI into Software Development
    36:10 Standardization vs. Customization in AI Tools
    39:05 The Evolving Role of Forward-Deployed Engineers
    42:48 Understanding the FDE Role at OpenAI
    46:10 The Recruitment Process at OpenAI
    49:50 Exploring Related Content
49:58 Outro

    Arnaud's Links
    https://www.linkedin.com/in/arnaudfrn/
    https://openai.com/index/introducing-openai-frontier/
    https://community.openai.com/t/introducing-the-new-codex-for-almost-everything/1379125
    https://openai.com/index/scaling-codex-to-enterprises-worldwide/

    John's Links:
    John's LinkedIn: https://www.linkedin.com/in/johncrickett/
John's YouTube: https://www.youtube.com/@johncrickett
    John's Twitter: https://x.com/johncrickett
    John's Bluesky: https://bsky.app/profile/johncrickett.bsky.social

Check out John's software engineering newsletters:

    Coding Challenges: https://codingchallenges.substack.com/ shares real-world project ideas you can use to level up your coding skills.

    Developing Skills: https://read.developingskills.fyi/ covers everything from system design to soft skills, helping engineers progress from junior to staff+, or move onto a management track.
  • Coding Chats

    LLM as a Judge: Why Your AI Might Be Marking Its Own Homework

    30/04/2026 | 1h 7 mins.
    Coding Chats episode 76 - John talks to Laura Dietz - a computer science professor whose work focuses on whether AI evaluation metrics actually tell the truth. She's known for her critical take on "LLM as a judge" — not because she thinks it's useless, but because she wants numbers that mean something rather than numbers that just make a system look good.

The conversation tackles some uncomfortable realities for software engineers: using an LLM to write code and another LLM to review it is a circular trap, prompt engineering shouldn't be a computer scientist's day job, and every time you reject your coding AI's output, you're quietly generating the training data that shapes its successor.

    Chapters
    00:00 Introduction to Laura Dietz and Her Journey
    03:12 Exploring LLMs as Judges
    06:16 Challenges in Evaluating Search Systems
    08:49 The Evolution of User Queries and Expectations
    11:46 The Role of LLMs in Information Retrieval
    14:44 Defining Quality in Search Results
    17:27 The Complexity of User Intent
    19:54 Human-AI Collaboration in Code Review
    22:53 The Future of LLMs in Software Development
    25:23 Balancing Human and AI Roles
    28:20 Innovative Approaches to AI Evaluation
    34:10 The Art of Assembling Ideas
    36:39 Balancing Cost and Quality in LLMs
    39:09 Evaluating LLM Performance
    43:50 The Future of LLMs and Training Data
    49:19 Exploring New Architectures in AI
    55:16 Understanding In-Context Learning
    01:00:45 The Role of AI in Creative Expression
    01:06:59 Exploring Related Content

    Laura's Links:
https://www.cs.unh.edu/~dietz/
    https://www.linkedin.com/in/laura-dietz-47036516/

    John's Links:
    John's LinkedIn: https://www.linkedin.com/in/johncrickett/
John's YouTube: https://www.youtube.com/@johncrickett
    John's Twitter: https://x.com/johncrickett
    John's Bluesky: https://bsky.app/profile/johncrickett.bsky.social

Check out John's software engineering newsletters:

    Coding Challenges: https://codingchallenges.substack.com/ shares real-world project ideas you can use to level up your coding skills.

    Developing Skills: https://read.developingskills.fyi/ covers everything from system design to soft skills, helping engineers progress from junior to staff+, or move onto a management track.

    Takeaways
    Using an LLM to both generate and evaluate outputs is circular — like a student grading their own homework.
    If your evaluation metric can go up without your system actually improving, it's not a real metric.
    A better human-in-the-loop isn't one that rubber-stamps AI suggestions — it's one that's guided to look in the right place.
    LLMs don't get bored, which makes them genuinely useful for code review — but that's not the same as making them accurate.
    "Faith-based engineering" — trusting AI output without validation — is a real and growing problem in software teams.
    Prompt engineering is a workaround, not a discipline; real engineers should be building systems, not crafting incantations.
    Every rejection you give your code AI is training signal — your frustration today is someone else's better tool tomorrow.
The transformer attention mechanism is a weighted sum, and a sum isn't always the right operation — some problems need an AND, not an OR (made precise in the equation after this list).
    AI tools are lowering the barrier to coding for people who were previously too intimidated to try, and that's worth celebrating.
    The same network effect that makes a platform valuable also makes monopoly in AI training data genuinely dangerous.
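
    To make the attention takeaway above precise: standard scaled dot-product attention computes a softmax-weighted sum of value vectors, so every output is a convex combination of its inputs. The equation is the standard one from the transformer literature; the AND/OR framing is Laura's.

    ```latex
    % Scaled dot-product attention: a weighted sum over the value vectors.
    \mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
    ```

    Because the softmax weights are nonnegative and sum to one, attention can softly select among inputs (OR-like) but has no built-in way to insist that several conditions hold at once (AND-like), which is the limitation being pointed to.
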
  • Coding Chats

Let it crash! How Erlang and BEAM build bulletproof software

    23/04/2026 | 37 mins.
Coding Chats episode 75 - Erik Stenman talks to John Crickett about the BEAM virtual machine — the runtime behind Erlang, Elixir, and Gleam. Built by Ericsson in the 1980s for telephone switches, it was designed for fault tolerance and concurrency from day one, yet never achieved mainstream popularity despite being technically superior to many alternatives.

    The discussion covers what makes BEAM unique: lightweight isolated processes, a "let it crash" fault philosophy, and powerful built-in introspection. Erik also shares practical lessons from production use and explains why newer languages like Elixir and Gleam are finally bringing BEAM the attention it deserves.

    Chapters
00:00 Introduction to BEAM and Erlang
    02:45 The Unique Features of Erlang and BEAM
    05:17 Concurrency and Fault Tolerance in BEAM
    07:34 Applications and Use Cases of Erlang
    10:00 Error Handling and Process Supervision
    12:49 Performance Considerations in BEAM
    15:09 Learning and Adopting Erlang and Elixir
    17:28 The Future of Erlang, Elixir, and Gleam
    37:04 Exploring Related Content

    Erik's Links:
https://happihacking.com/
    https://happihacking.com/blog/
    https://github.com/happi/theBeamBook
    https://www.amazon.com/dp/9153142535
    https://www.elixirconf.eu/trainings/the-beam-for-developers/

    John's Links:
    John's LinkedIn: https://www.linkedin.com/in/johncrickett/
John's YouTube: https://www.youtube.com/@johncrickett
    John's Twitter: https://x.com/johncrickett
    John's Bluesky: https://bsky.app/profile/johncrickett.bsky.social

Check out John's software engineering newsletters:

    Coding Challenges: https://codingchallenges.substack.com/ shares real-world project ideas you can use to level up your coding skills.

    Developing Skills: https://read.developingskills.fyi/ covers everything from system design to soft skills, helping engineers progress from junior to staff+, or move onto a management track.

    Takeaways
    BEAM was built for telephone switches in the 1980s — its reliability features translate surprisingly well to modern web and distributed systems.
    Erlang lost the popularity race to Java largely due to marketing, not technical merit.
    BEAM processes are extremely lightweight — hundreds of bytes, not kilobytes — allowing millions to run concurrently.
    "Let it crash" is a design philosophy, not laziness — isolating failures prevents one bad process from taking down the whole system.
    No shared memory between processes eliminates an entire class of concurrency bugs.
    Per-process garbage collection means no "stop the world" pauses like you get in Java.
    Hot code loading lets you upgrade a running system without downtime — but it requires careful thought about data structure changes.
    BEAM's built-in introspection lets you inspect a live system in real time, making debugging far faster.
    Elixir and Gleam are modernising the syntax and bringing new developers onto the BEAM platform.
    BEAM doesn't solve everything — good architecture still matters, but it gets you there faster than most alternatives.
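
    A rough analogy for the "let it crash" takeaway above, sketched with Python's standard library since the notes contain no Erlang. Each worker runs in its own OS process with isolated memory, and a supervisor restarts workers that die rather than defending against every possible error inside them. This illustrates the philosophy only; real BEAM processes are orders of magnitude lighter than OS processes.

    ```python
    # Analogy only: BEAM-style "let it crash" supervision via multiprocessing.
    # Real BEAM processes cost a few hundred bytes; OS processes are far
    # heavier, but they give the same isolation: a crash cannot corrupt peers.
    import multiprocessing as mp

    def worker(tasks):
        """Isolated worker: no shared memory with the supervisor or peers."""
        while True:
            task = tasks.get()
            if task is None:
                return  # sentinel: all work done, exit cleanly
            if task == "boom":
                raise RuntimeError("bad input")  # let it crash: no defensive code
            print(f"processed {task!r}")

    def supervise(tasks, max_restarts=3):
        """One-for-one supervision: replace a dead worker with a fresh one."""
        for _ in range(max_restarts + 1):
            proc = mp.Process(target=worker, args=(tasks,))
            proc.start()
            proc.join()  # returns when the worker exits or dies
            if proc.exitcode == 0:
                return  # clean exit, nothing to restart
            print(f"worker died (exit code {proc.exitcode}); restarting")

    if __name__ == "__main__":
        queue = mp.Queue()
        for task in ["a", "b", "boom", "c", None]:
            queue.put(task)
        supervise(queue)  # survives the crash at "boom" and still finishes "c"
    ```

    Note what the crash costs: the in-flight task is lost, but the rest of the queue is handled by the replacement worker, which is exactly the trade the philosophy makes.
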
  • Coding Chats

    AI writes it. You own it. Don't ship AI slop

    16/04/2026 | 56 mins.
Coding Chats episode 74 - John Crickett talks to Nnenna Ndukwe, a developer advocate at Qodo, about how teams can maintain code quality in the age of AI coding tools. She argues that AI agents should be combined with traditional tools like linters and static analysis — not replace them — and that teams need to define and codify what "good code" looks like so that consistency can be enforced across the whole development lifecycle.

    A recurring theme is developer ownership: as AI writes more code, engineers must stay in the driver's seat, genuinely reviewing what gets shipped rather than blindly accepting it. The episode also touches on dogfooding, with both agreeing that using your own tools internally is a strong signal of a product worth trusting.

    Chapters
    00:00 Introduction to AI in Software Development
    03:24 Embedding Quality Gates in Development
    06:03 The Importance of Consistency in Code
    09:09 Ownership and Critical Thinking in Engineering
    12:00 Balancing Tool Freedom and Intellectual Property
    14:56 Navigating AI Tools and Workflows
    17:47 Managing Burnout in AI Development
    20:47 The Evolution of Coding and Instant Gratification
    23:47 Documenting Ideas and Project Management
    26:54 Using AI for Ideation and Collaboration
    31:38 The Joy of Learning Through AI
34:11 Qodo: Enhancing Code Quality and Governance
    37:22 Comparing Code Review Tools
    40:10 The Future of AI in Software Development
    50:51 The Importance of Dogfooding Products
    56:12 Exploring Related Content

    Nnenna's Links:
    https://nnennahacks.com
    https://linkedin.com/in/nnenna-ndukwe/
    https://x.com/nnennahacks

    John's Links:
    John's LinkedIn: https://www.linkedin.com/in/johncrickett/
John's YouTube: https://www.youtube.com/@johncrickett
    John's Twitter: https://x.com/johncrickett
    John's Bluesky: https://bsky.app/profile/johncrickett.bsky.social

Check out John's software engineering newsletters:

    Coding Challenges: https://codingchallenges.substack.com/ shares real-world project ideas you can use to level up your coding skills.

    Developing Skills: https://read.developingskills.fyi/ covers everything from system design to soft skills, helping engineers progress from junior to staff+, or move onto a management track.

    Takeaways
Combine AI coding tools with deterministic tools (linters, static analysis) — don't ditch one for the other (a sketch follows this list).
    Define what "good code" looks like for your team before expecting AI agents to enforce it.
    Embed quality checks early and consistently across every stage of the dev lifecycle.
    Developers must stay in the driver's seat — ownership and understanding of AI-generated code is a key differentiator.
    Code consistency (naming conventions, style, structure) becomes even more valuable when LLMs are in the mix.
    Coding rules need to live in a centralised, accessible place so all agents can rely on them.
    Dogfooding your own tools internally is a non-negotiable sign of a trustworthy product.
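
    One way to picture the first takeaway in the list above: run the deterministic gates first and let AI review comment only on code that survives them. In this sketch, ai_review is a hypothetical placeholder (not Qodo's API), ruff is just an example linter command, and CODING_RULES.md is an invented name for a team's centralised rules file.

    ```python
    # Hypothetical quality gate combining deterministic tools with AI review.
    # ai_review() is a placeholder, not any specific product's API.
    import subprocess
    import sys

    def run_linter(path):
        """Deterministic gate: swap in your team's linter or static analyser."""
        result = subprocess.run(["ruff", "check", path],  # example command only
                                capture_output=True, text=True)
        return [] if result.returncode == 0 else result.stdout.splitlines()

    def ai_review(path, team_rules):
        """Placeholder AI reviewer, prompted with the team's codified rules so
        its feedback stays consistent across agents and reviewers."""
        raise NotImplementedError

    def quality_gate(path, team_rules):
        failures = run_linter(path)
        if failures:
            print("\n".join(failures))  # cheap, deterministic problems first
            return 1  # don't spend AI review effort on lint errors
        for comment in ai_review(path, team_rules):
            print(comment)
        return 0

    if __name__ == "__main__":
        with open("CODING_RULES.md") as f:  # centralised rules file
            sys.exit(quality_gate(sys.argv[1], f.read()))
    ```

    Keeping the rules in one file the gate reads also addresses the takeaway about coding rules living in a centralised place that every agent can rely on.
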
About Coding Chats
On Coding Chats, John Crickett interviews software engineers of all levels, from junior to CTO. He encourages guests to share stories of the challenges they have faced in their roles and the strategies and tactics they used to overcome them, providing actionable insights other software engineers can use to accelerate their careers.