The hidden costs of pre-computing data | Chalk's Elliot Marx
Is your engineering team wasting budget and sacrificing latency by pre-computing data that most users never see? Chalk co-founder Elliot Marx joins Andrew Zigler to explain why the future of AI relies on real-time pipelines rather than traditional storage. They dive into solving compute challenges for major fintechs, the value of incrementalism, and Elliot's thoughts on why strong fundamental problem-solving skills still beat specific language expertise in the age of AI assistants.

Join our AI Productivity roundtable: 2026 Benchmarks Insights

This episode was recorded live at the Engineering Leadership Conference.

Follow today's guest(s):
Elliot Marx: LinkedIn
Chalk: Website | Twitter/X | Careers
--------
40:49
--------
Are developers happy yet? Unpacking the 2025 Developer Survey | Stack Overflow’s Erin Yepis
After hitting a low point last year, developer job satisfaction is officially on the rise. Erin Yepis returns to the show to unpack the 2025 Stack Overflow Developer Survey, analyzing how autonomy and compensation are driving this recovery. We also cover the happiness gap between senior and junior engineers, the surprising drop in trust for AI tools, and why vibe coding is failing to catch on with professional engineers.

LinearB: Measure the impact of GitHub Copilot and Cursor

Follow today's guest(s):
Read the full report: 2025 Stack Overflow Developer Survey
Stack Overflow Blog: Read Erin's analysis and more
Erin Yepis: Connect on LinkedIn
--------
59:58
--------
From Kubernetes to AI maximalism | Stacklok's Craig McLuckie
When you co-create Kubernetes, you earn the right to have strong opinions on the next platform shift. This week, Ben sits down with Craig McLuckie, Co-founder & CEO of Stacklok, who is advocating for a shift in leadership mindset. He argues we need to move from asking if we can use AI to demanding to know why we can't. Listen to hear why he believes an "AI maximalist" philosophy is the only way to survive the next cycle.

LinearB: Measure the impact of GitHub Copilot and Cursor

Follow today's guest(s):
Connect with Craig McLuckie: LinkedIn
Check out Stacklok: Stacklok Website
--------
55:29
--------
Speed is the moat | AMD’s Anush Elangovan
In the race to define the future of AI, what's the one advantage that truly lasts? It's not proprietary tech, argues Anush Elangovan, VP of AI Software at AMD, but the sustainable speed of innovation. He explains why AMD is rejecting the "walled garden" model for its open source ROCm stack, betting that an open community flywheel is the key to victory. Listen to understand how this open strategy is designed to out-innovate closed systems by empowering developers to solve everything from frontier-model challenges to the mundane, everyday problems that define the "last mile" of AI.

LinearB: Your AI productivity journey starts here

Follow today's guest(s):
Follow Anush: LinkedIn | X
AMD ROCm Software: github.com/ROCm
AMD Developer Cloud
Learn more at: amd.com
--------
52:21
--------
How spec-driven development is changing the rules | AWS’ Amit Patel
What is "spec-driven development," and why is this structured approach the key to unlocking complex AI projects? We're joined by Amit Patel, Director of Software Development for Kiro at AWS, to explore this methodology. He explains why "vibe coding" in a chat window fails on multi-day initiatives: the AI (and the developer) loses context. Kiro solves this by turning requirements and design into a persistent, structured spec that acts as the agent's long-term memory, enabling it to maintain context and build sophisticated applications.Amit shares the inside story of how his team at AWS built Kiro from scratch in under a year. He reveals their virtuous feedback loop with internal developers testing nightly builds and providing real-time feedback. This rapid iteration, which included six full revs of the spec experience, was so successful that the Kiro team famously "used the tool to build the tool," turning a multi-week feature into a two-day task. LinearB: Your AI productivity journey starts hereFollow the show:Subscribe to our Substack Follow us on LinkedInSubscribe to our YouTube ChannelLeave us a ReviewFollow the hosts:Follow AndrewFollow BenFollow DanFollow today's guest(s):Learn more and try Kiro: kiro.devJoin the Kiro Community: Kiro Discord Channel OFFERS Start Free Trial: Get started with LinearB's AI productivity platform for free. Book a Demo: Learn how you can ship faster, improve DevEx, and lead with confidence in the AI era. LEARN ABOUT LINEARB AI Code Reviews: Automate reviews to catch bugs, security risks, and performance issues before they hit production. AI & Productivity Insights: Go beyond DORA with AI-powered recommendations and dashboards to measure and improve performance. AI-Powered Workflow Automations: Use AI-generated PR descriptions, smart routing, and other automations to reduce developer toil. MCP Server: Interact with your engineering data using natural language to build custom reports and get answers on the fly.
Dev Interrupted is the go-to podcast for software engineering leadership. Each week, hosts Andrew Zigler, Ben Lloyd Pearson, and Dan Lines sit down with industry experts to explore the strategies, struggles, and stories behind high-performing software teams. Paired with weekly industry news coverage, the conversations dive deep into the real challenges that define excellence in modern tech.