
The DevSecOps Talks Podcast

Mattias Hemmingsson, Julien Bisconti and Andrey Devyatkin

97 episodes

  • #97 - Shift Left, Get Hacked: Supply Chain Attacks Hit Devs

    15/04/2026 | 35 mins.
    March 2026 made supply chain attacks feel a lot less theoretical, but what made these incidents different? The hosts discuss compromised publishing credentials, automatic execution hooks like post-install scripts and Python `.pth` files, and how both humans and security tools caught the malicious releases. They also talk through concrete ways to make developer environments harder to abuse. 

    We are always happy to answer any questions, hear suggestions for new episodes, or hear from you, our listeners.

    DevSecOps Talks podcast LinkedIn page

    DevSecOps Talks podcast website

    DevSecOps Talks podcast YouTube channel
  • #96 - Keeping Platforms Simple and Fast with Joachim Hill-Grannec

    01/04/2026 | 48 mins.
    This episode with Joachim Hill-Grannec asks: How do platforms bloat, and how do you keep them simple and fast with trunk-based dev and small batches? Which metrics prove it works—cycle time, uptime, or developer experience? Can security act as a partner that speeds delivery instead of a gate?

     


    Summary
    In this episode of DevSecOps Talks, Mattias speaks with Joachim Hill-Grannec, co-founder of Peltek, a boutique consulting firm specializing in high-availability, cloud-native infrastructure. Following up on a previous episode where Steve discussed cleaning up bloated platforms, Mattias and Joachim dig into why platforms get bloated in the first place and how platform teams should think when building from scratch. Their conversation spans cloud provider preferences, the primacy of cycle time, the danger of adding process in response to failure, and a strong argument for treating security and quality as enablers rather than gatekeepers.

    Key Topics
    Platform Teams Should Serve Delivery Teams
    Joachim frames the core question of platform engineering around who the platform is actually for. His answer is clear: the delivery teams are the client. Platform engineers should focus on making it easier for developers to ship products, not on making their own work more convenient.

    He connects this directly to platform bloat. In his experience, many platforms grow uncontrollably because platform engineers keep adding tools that help the platform team itself: "Look, I spent this week to make my job this much faster." But Joachim pushes back on this instinct — the platform team is an amplifier for the organization, and every addition should be evaluated by whether it helps a product get to production faster and gives developers better visibility into what they are working on.

    Choosing a Cloud Provider: Preferences vs. Reality
    The conversation briefly explores cloud provider choices. Joachim says GCP is his personal favorite from a developer perspective because of cleaner APIs and faster response times, though he acknowledges Google's tendency to discontinue services unexpectedly. He describes AWS as the market workhorse — mature, solid, and widely adopted, comparing it to "the Java of the land." Azure gets the coldest reception; both acknowledge it has improved over time, but Joachim says he still struggles whenever he is forced to use it.

    They observe that cloud choices are frequently made outside engineering. Finance teams, investors, and existing enterprise agreements often drive the decision more than technical fit. Joachim notes a common pairing: organizations using Google Workspace for productivity but AWS for cloud infrastructure, partly because the Entra ID (formerly Azure AD) integration with AWS Identity Center works more smoothly via SCIM than the equivalent Google Workspace setup, which requires a Lambda function to sync groups.

    Measuring Platform Success: Cycle Time Above All
    When Mattias asks how a team can tell whether a platform is actually successful, Joachim separates subjective and objective measures.

    On the subjective side, he points to developer happiness and developer experience (DX). Feedback from delivery teams matters, even if surveys are imperfect.

    On the objective side, his favorite metric is cycle time — specifically, the time from when code is ready to when it reaches production. He also mentions uptime and availability, but keeps returning to cycle time as the clearest indicator that a platform is helping teams deliver faster. This aligns with DORA research, which has consistently shown that deployment frequency and lead time for changes are strong predictors of overall software delivery performance.
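    As a concrete illustration of the metric (not from the episode), cycle time can be measured by recording when each change is ready and when it reaches production, then looking at the median and the tail. The timestamps below are made up.

```python
from datetime import datetime
from statistics import median

# Hypothetical deploy records: (code_ready, deployed_to_prod) timestamps.
deploys = [
    ("2026-03-02T09:15", "2026-03-02T11:40"),
    ("2026-03-03T14:00", "2026-03-04T10:05"),
    ("2026-03-05T08:30", "2026-03-05T09:10"),
]

def cycle_times_hours(records):
    """Hours from 'code ready' to 'running in production', per change."""
    fmt = "%Y-%m-%dT%H:%M"
    return [
        (datetime.strptime(done, fmt) - datetime.strptime(ready, fmt)).total_seconds() / 3600
        for ready, done in records
    ]

hours = cycle_times_hours(deploys)
print(f"median cycle time: {median(hours):.1f}h, worst: {max(hours):.1f}h")
```

    Tracking the worst case alongside the median matters: a platform that usually ships in hours but occasionally takes a day is hiding exactly the kind of friction Joachim wants surfaced.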

    Start With a Highway to Production
    A major theme of the episode is that platforms should begin with the shortest possible route to production. Mattias calls this a "highway to production," and Joachim strongly agrees.

    For greenfield projects, Joachim favors extremely fast delivery at first — commit goes to production, commit goes to production — even with minimal process. As usage and risk increase, teams can gradually add automation, testing, and safeguards. The critical thing is to keep the flow and then ask "how do we make those steps faster?" as you add them, rather than letting each new step slow down the pipeline unchallenged.

    He also makes a strong case for tags and promotions over branch-based deployment, noting his instinctive reaction when someone asks "which branch are we deploying from?" is: "No branches — tags and promotions."
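    One way to picture tags-and-promotions (a hypothetical model, not Joachim's actual tooling): each release is an immutable version, and "deploying" means pointing the next environment's tag at it, so no branch ever diverges from trunk.

```python
# Hypothetical promotion model: one immutable artifact version moves through
# environments by retagging, rather than by merging between long-lived branches.
ENVIRONMENTS = ["dev", "staging", "prod"]

def promote(current_tags: dict, version: str) -> dict:
    """Point the next environment at `version`; the artifact itself never changes."""
    tags = dict(current_tags)
    for env in ENVIRONMENTS:
        if tags.get(env) != version:
            tags[env] = version  # in practice: push a tag the CD system watches
            return tags
    raise ValueError(f"{version} is already live everywhere")

tags = {"dev": "v1.4.2", "staging": "v1.4.1", "prod": "v1.4.0"}
tags = promote(tags, "v1.4.2")  # staging now runs v1.4.2
```

    The point of the model is that the exact artifact tested in staging is the one promoted to prod; there is no "deploy branch" whose contents can drift.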

    The Trap of Slowing Down After Failure
    Joachim warns about a common and dangerous pattern: when a bug reaches production, the natural organizational reaction is not to fix the pipeline, but to add gates. A QA team does a full pass, a security audit is inserted, a manual review step appears. Each gate slows delivery, which leads to larger batches, which increases risk, which triggers even more controls.

    He sees this as a vicious cycle. Organizations that respond to incidents by slowing delivery actually get worse security, worse quality, and worse throughput over time. He references a study — likely the research behind the book Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim — showing that faster delivery correlates with better security and quality outcomes. The organizations adding Engineering Review Boards (ERBs) and Architecture Review Boards (ARBs) in the name of safety often do not measure the actual impact, so they never see that the controls are making things worse.

    Mattias connects this to AI-assisted development, where developers can now produce changes faster than ever. If the pipeline cannot keep up, the pile of unreleased changes grows, making each release riskier.

    Getting Buy-In: Start With Small Experiments
    Joachim does not recommend that a slow, process-heavy organization throw everything out overnight. Instead, he suggests starting with small experiments. Code promotions are a good entry point: teams can start producing artifacts more rapidly without changing how those artifacts are deployed. Once that works, the conversation shifts to delivering those artifacts faster.

    He finds starting on the artifact pipeline side produces quicker wins and more organizational buy-in than starting with the platform deployment side, which tends to be more intertwined and higher-risk to change.

    Guiding Principles Over a Rigid Golden Path
    Mattias questions the idea of a single "golden path," saying the term implies one rigid way of working. Joachim leans toward guiding principles instead.

    His strongest principle is simplicity — specifically, simplicity to understand, not necessarily simplicity to create. He references Rich Hickey's influential talk Simple Made Easy (from Strange Loop 2011), which distinguishes between things that are simple (not intertwined) and things that are easy (familiar or close at hand). Creating simple systems is hard work, but the payoff is systems that are easy to reason about, easy to change, and easy to secure.

    His second guiding principle is replaceability. When evaluating any tool in the platform, he asks: "How hard would it be to yank this out and replace it?" If swapping a component would be extremely difficult, that is a smell — it means the system has become too intertwined. Even with a tool as established as Argo CD, his team thinks about what it would look like to switch it out.

    Tooling Choices and Platform Foundations
    Joachim outlines the patterns his team typically uses when building platforms, organized into two paths:

    Delivery pipeline (artifact creation):
    - Trunk-based development over GitFlow
    - Release tags and promotions rather than branch-based deployment
    - Containerization early in the pipeline
    - Release Please for automated release management and changelogs
    - Renovate for dependency updates (used for production environment promotions from Helm charts and container images)

    Platform side (environment management):
    - Kubernetes-heavy, typically EKS on AWS
    - Karpenter for node scaling
    - AWS Load Balancer Controller only as a backing service for a separate ingress controller (not using ALB Ingress directly, due to its rough edges)
    - Argo CD for GitOps synchronization and deployment
    - Argo Image Updater for lower environments to pull latest images automatically
    - Helm for packaging, despite its learning curve

    He notes that NGINX Ingress Controller has been deprecated, so teams need to evaluate alternatives for their ingress layer.

    Developers Should Not Be Fully Shielded From Operations
    One of the more nuanced parts of the conversation is how much operational responsibility developers should have. Joachim rejects both extremes. He does not think every developer needs to know everything about infrastructure, but he has seen too many cases where developers completely isolated from runtime concerns make poor decisions — missing simple code changes that would make a system dramatically easier to deploy and operate.

    He advocates for transparency and collaboration. Platform repos should be open for anyone on the dev team to submit pull requests. When the platform team makes a change, they should pull in developers to work alongside them. This way, the delivery team gradually builds a deeper understanding of how the whole system works.

    Joachim loves the open-source maintainer model applied inside organizations: platform teams are maintainers of their areas, but anyone in the organization should be able to introduce change. He warns against building custom CLIs or heavy abstractions that create dependencies — if a developer wants to do something the CLI does not support, the platform team becomes a bottleneck.

    Mattias adds that opening up the platform to contributions also exposes assumptions. What feels easy to the person who built it may not be easy at all; it is just familiar. Outside contributors reveal where the system is actually hard to understand.

    Designers, Not Artists: Detaching Ego From Code
    Joachim shares an analogy he prefers over the common "developers as artists" framing. He sees developers more like designers than artists, because an artist's work is tied to their identity — they want it to endure. A designer, by contrast, creates something to serve a purpose and expects it to be replaced when something better comes along.

    He applies this to platforms and infrastructure: "I want my thing to get wiped out. If I build something, I want it to get removed eventually and have something better replace it." Organizations where ego is tied to specific systems or tools tend to resist change, which leads to the kind of dysfunction that keeps platforms bloated and brittle.

    Complexity Is the Enemy of Security
    Mattias raises the difficulty of maintaining complex security setups over time, especially when the original experts leave. Joachim responds firmly: complexity is anti-security.

    If people cannot comprehend a system, they cannot secure it well. He acknowledges that some problems are genuinely hard, but argues that much of the complexity engineers create is unnecessary — driven by ego rather than need. "The really smart people are the ones that create simple things," he says, wishing the industry would redirect its narrative from admiring complicated systems to admiring simple ones.

    Security and QA as Internal Consulting, Not Gatekeeping
    Joachim draws a parallel between security and QA. He dislikes calling a team "the quality team," preferring "verification" — they are one component of quality, not the entirety of it. Similarly, security is not one team's responsibility; it spans product design, development practices, tooling, and operations.

    His ideal model is for security and QA teams to operate as internal consultants whose goal is to reduce risk and improve the overall system — not to catch every possible issue at any cost. The framing matters: if a security team's mandate is simply "block all security issues," the logical conclusion is to stop shipping or delete the product entirely. That may be technically secure, but it is useless.

    He frames security as risk management: "Security is a risk management process, not just security for the sake of security. You're managing the risk to the business." The goal should be to deliver faster and more securely — an "and," not an "or."

    Mattias recalls a PCI DSS consultant joking over drinks that a system being down is perfectly compliant — no one can steal card numbers if the system is unavailable. The joke lands because it exposes exactly the broken incentive Joachim describes.

    Business Value as the Unifying Frame
    The episode closes by tying everything back to business outcomes. Joachim argues that speed and security are not opposites; both contribute to business value. Fast delivery creates value directly, while security reduces business risk — and risk management is itself a business operation.

    He explains why focusing on the highest-impact business bottleneck first builds trust. When you hit the big items first, you earn credibility, and subsequent changes become easier to justify. For example, one of his clients has a security group that is the slowest part of their organization. Speeding up that security process would have a massive impact on business delivery — more than optimizing the artifact pipeline.

    Mattias reflects that he used to see platform work as separate from business concerns — "I don't care about the business, I'm here to build a platform for developers." Looking back, he would reframe that: using business impact as the measure of platform success does not mean abandoning the focus on developers, it means having a clearer way to prioritize and demonstrate value.

    Highlights

    Joachim on platform bloat: "Your job is not to make your job faster and easier — you're an amplifier to the organization."

    Joachim on his favorite metric: "Cycle time is my favorite metric. I love cycle time metrics."

    Joachim on deployment strategy: "No branches, no branches — tags and promotions."

    Mattias on platform design: He calls the ideal early setup a "highway to production."

    Joachim on simplicity vs. ease: He references Rich Hickey's Simple Made Easy talk — "It's very hard to create simple systems that are easy to reason about. And it's very easy to create systems that are very hard to reason about."

    Joachim on replaceability: "If swapping a tool out would be extremely hard, that's a pretty big smell."

    Joachim on complexity and security: "If it's complicated, you just can't keep all the context together. Simple systems are much easier to be secure."

    Joachim on engineering ego: "I don't particularly like the aspect of [developers as] artists... I want my thing to get wiped out. I want it to get removed eventually and have something better replace it." He prefers the analogy of designers over artists, because artists tie their identity to their creations.

    Joachim on security as a blocker: "If their goal is we are going to block every security issue, the best way to do that is delete your product."

    Spicy cloud takes: Joachim calls GCP his favorite cloud for developers, compares AWS to "the Java of the land," and says he still struggles every time he is forced to use Azure.

    PCI DSS dark humor: Mattias recalls a consultant joking that a downed system is perfectly compliant — you cannot steal card numbers from a system that is not running.

    Joachim on the slow-down trap: Organizations add ERBs, ARBs, and manual security gates after incidents, but "the faster you can deliver, you actually get better security, better quality, and better throughput — and the more you slow it down, you go the opposite."

    Resources

    Simple Made Easy by Rich Hickey (InfoQ) — The influential 2011 talk Joachim references on distinguishing simplicity from ease in system design.

    DORA Metrics: The Four Keys — The research framework behind cycle time, deployment frequency, and the finding that speed and stability are not tradeoffs.

    Trunk Based Development — A comprehensive guide to the branching strategy Joachim recommends over GitFlow.

    Argo CD — Declarative GitOps for Kubernetes — The GitOps tool Joachim's team uses for cluster synchronization and deployment.

    Release Please (GitHub) — Google's tool for automated release management based on conventional commits, used by Joachim's team for tag-based promotions.

    Karpenter — Kubernetes Node Autoscaler — The node autoscaler Joachim's team uses with EKS for fast, flexible scaling.

    Renovate — Automated Dependency Updates — The dependency management bot Joachim uses for both build dependencies and production environment promotions.
  • #95 - From Platform Theater to Golden Guardrails with Steve Wade

    23/03/2026 | 45 mins.
    Is your Kubernetes stack bloated, slow, and hard to explain? Steve Wade shares simple checks—the hiring treadmill, onboarding time, and the acronym test—to spot platform theater fast. What would a 30-day deletion sprint cut, save, and secure? 


    Summary
    In this episode of DevSecOps Talks, Mattias and Paulina speak with Steve Wade, founder of Platform Fix, about why so many Kubernetes and platform initiatives become overcomplicated, expensive, and painful for developers. Steve has helped simplify over 50 cloud-native platforms and estimates he has removed around $100 million in complexity waste. The conversation covers how to spot a bloated platform, why "free" tools are never really free, how to systematically delete what you don't need, and why the best platform engineering is often about subtraction rather than addition.

    Key Topics
    Steve's Background: From Complexity Creator to Strategic Deleter
    Steve introduces himself as the founder of Platform Fix — the person companies call when their Kubernetes migration is 18 months in, millions over budget, and their best engineers are leaving. He has done this over 50 times, and he is candid about why it matters so much to him: he used to be this problem.

    Years ago, Steve led a migration that was supposed to take six months. Eighteen months later, the team had 70 microservices, three service meshes (they kept starting new ones without finishing the old), and monitoring tools that needed their own monitoring. Two senior engineers quit. The VP of Engineering gave Steve 90 days or the team would be replaced.

    Those 90 days changed everything. The team deleted roughly 50 of the 70 services, ripped out all the service meshes, and cut deployment time from three weeks of chaos to three days, consistently. Six months later, one of the engineers who had left came back. That experience became the foundation for Platform Fix.

    As Steve puts it: "While everyone's collecting cloud native tools like Pokemon cards, I'm trying to help teams figure out which ones to throw away and which ones to keep."

    Why Platform Complexity Happens
    Steve explains that organizations fall into a complexity trap by continuously adding tools without questioning whether they are actually needed. He describes walking into companies where the platform team spends 65–70% of their time explaining their own platform to the people using it. His verdict: "That's not a team, that's a help desk with infrastructure access."

    People inside the complexity normalize it. They cannot see the problem because they have been living in it for months or years. Steve identifies several drivers: conference-fueled recency bias (someone sees a shiny tool at KubeCon and adopts it without evaluating the need), resume-driven architecture (engineers choosing tools to pad their CVs), and a culture where everyone is trained to add but nobody asks "what if we remove something instead?"

    He illustrates the resume-driven pattern with a story from a 200-person fintech. A senior hire — "Mark" — proposed a full stack: Kubernetes, Istio, Argo, Crossplane, Backstage, Vault, Prometheus, Loki, Tempo, and more. The CTO approved it because "Spotify uses it, so it must be best practice." Eighteen months and $2.3 million later, six engineers were needed just to keep it running, developers waited weeks to deploy, and Mark left — with "led Kubernetes migration" on his CV. When Steve asked what Istio was actually solving, nobody could answer. It was costing around $250,000 to run, for a problem that could have been fixed with network policies.

    He also highlights a telling sign: he asked three people in the same company how many Kubernetes clusters they needed and got three completely different answers. "That's not a technical disagreement. That's a sign that nobody's aligned on what the platform is actually for."

    The AI Layer: Tool Fatigue Gets Worse
    Paulina observes that the same tool-sprawl pattern is now being repeated with AI tooling — an additional layer of fatigue on top of what already exists in the cloud-native space. Steve agrees and adds three dimensions to the AI complexity problem: choosing which LLM to use, learning how to write effective prompts, and figuring out who is accountable when AI-written code does not work as expected. Mattias notes that AI also enables anyone to build custom tools for their specific needs, which further expands the toolbox and potential for sprawl.

    How Leaders Can Spot a Bloated Platform
    One of the most practical segments is Steve's framework for helping leaders who are not hands-on with engineering identify platform bloat. He gives them three things to watch for:

    The hiring treadmill: headcount keeps growing but shipping speed stays flat, because all new capacity is absorbed by maintenance.

    The onboarding test: ask the newest developer how long it took from their first day to their first production deployment. If it is more than a week, "it's a swamp." Steve's benchmark: can a developer who has been there two weeks deploy without asking anyone? If yes, you have a platform. If no, "you have platform theater."

    The acronym test: ask the platform team to explain any tool of their choosing without using a single acronym. If they cannot, it is likely resume-driven architecture rather than genuine problem-solving.

    The Sagrada Familia Problem
    Steve uses a memorable analogy: many platforms are like the Sagrada Familia in Barcelona — they look incredibly impressive and intricate, but they are never actually finished. The question leaders should ask is: what does an MVP platform look like, what tools does it need, and how do we start delivering business value to the developers who use it? Because, as Steve says, "if we're not building any business value, we're just messing around."

    Who the Platform Is Really For
    Mattias asks the fundamental question: who is the platform actually for? Steve's answer is direct — the platform's customers are the developers deploying workloads to it. A platform without applications running on it is useless.

    He distinguishes three stages:
    - Vanilla Kubernetes: the out-of-the-box cluster
    - Platform Kubernetes: the foundational workloads the platform needs to function (secret management, observability, perhaps a service mesh)
    - The actual platform: only real once applications are being deployed and business value is delivered

    The hosts discuss how some teams build platforms for themselves rather than for application developers or the business, which is a fast track to unnecessary complexity.

    Kubernetes: Standard Tool or Premature Choice?
    The episode explores when Kubernetes is the right answer and when it is overkill. Steve emphasizes that he loves Kubernetes — he has contributed to the Flux project and other CNCF projects — but only when it is earned. He gives an example of a startup with three microservices, ten users, and five engineers that chose Kubernetes because "Google uses it" and the CTO went to KubeCon. Six months later, they had infrastructure that could handle ten million users while serving about 97.

    "Google needs Kubernetes, but your Series B startup needs to ship features."

    Steve also shares a recent on-site engagement where he ran the unit economics on day two: the proposed architecture needed four times the CPU and double the RAM for identical features. One spreadsheet saved the company from a migration that would have destroyed the business model. "That's the question nobody asks before a Kubernetes migration — does the maths actually work?"

    Mattias pushes back slightly, noting that a small Kubernetes cluster can still provide real benefits if the team already has the knowledge and tooling. Paulina adds an important caveat: even if a consultant can deploy and maintain Kubernetes, the question is whether the customer's own team can realistically support it afterward. The entry skill set for Kubernetes is significantly higher than, say, managed Docker or ECS.

    Managed Services and "Boring Is Beautiful"
    Steve's recommendation for many teams is straightforward: managed platforms, managed databases, CI/CD that just works, deploy on push, and go home at 5 p.m. "Boring is beautiful, especially when you call me at 3 a.m."

    He illustrates this with a company that spent 18 months and roughly $850,000 in engineering time building a custom deployment system using well-known CNCF tools. The result was about 80–90% as good as GitHub Actions. The migration to GitHub Actions cost around $30,000, and the ongoing maintenance cost was zero.

    Paulina adds that managed services are not completely zero maintenance either, but the operational burden is orders of magnitude less than self-managed infrastructure, and the cloud provider takes on a share of the responsibility.

    The New Tool Tax: Why "Free" Tools Are Never Free
    A central theme is that open-source tools carry hidden costs far exceeding their license fee. Steve introduces the new tool tax framework with four components, using Vault (at a $40,000 license) as an example:

    Learning tax (~$45,000): three engineers, two weeks each for training, documentation, and mistakes

    Integration tax (~$20,000): CI/CD pipelines, Kubernetes operators, secret migration, monitoring of Vault itself

    Operational tax (~$50,000/year): on-call, upgrades, tickets, patching

    Opportunity tax (~$80,000): while engineers work on Vault, they are not building things that could save hundreds of hours per month

    Total year-one cost: roughly $243,000 — a 6x multiplier over the $40,000 budget. And as Steve points out, most teams never present this full picture to leadership.
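    The tax arithmetic can be sketched directly. Note that the rounded line items above sum to $235,000, slightly under the roughly $243,000 quoted in the episode, so treat all figures as approximate.

```python
# Rounded "new tool tax" figures from the episode (year one, USD).
license_fee = 40_000
taxes = {
    "learning": 45_000,      # three engineers, two weeks each
    "integration": 20_000,   # CI/CD, operators, migration, monitoring
    "operational": 50_000,   # first year of the ongoing run cost
    "opportunity": 80_000,   # what those engineers were not building
}

total = license_fee + sum(taxes.values())
multiplier = total / license_fee
print(f"year-one cost ~ ${total:,} ({multiplier:.1f}x the license fee)")
# → year-one cost ~ $235,000 (5.9x the license fee)
```

    Either way the shape of the result is the same: the license fee is the smallest component, and the operational tax recurs every year.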

    Mattias extends the point to tool documentation complexity, noting that anyone who has worked with Envoy's configuration knows how complicated it can be. Steve adds that Envoy is written in C++ — "How many C++ developers do you have in your organization? Probably zero." — yet teams adopt it because it offers 15 to 20 features that may or may not be useful.

    This is the same total cost of ownership concept the industry has used for on-premises hardware, but applied to the seemingly "free" cloud-native landscape. The tools are free to install, but they are not free to manage and maintain.

    Why Service Meshes Are Often the First to Go
    When Mattias asks which tool type Steve most often deletes, the answer is service meshes. Steve does not name a specific product but says six or seven times out of ten, service meshes exist because someone thought they were cool, not because the team genuinely needed mutual TLS, rate limiting, or canary deploys at the mesh level.

    Mattias agrees: in his experience, he has never seen an environment that truly required a service mesh. The demos at KubeCon are always compelling, but the implementation reality is different. Steve adds a self-deprecating note — this was him in the past, running three service meshes simultaneously because none of them worked perfectly and he kept starting new ones in test mode.

    A Framework for Deleting Tools
    Steve outlines three frameworks he uses to systematically simplify platforms.

    The Simplicity Test is a diagnostic that scores platform complexity across ten dimensions on a scale of 0 to 50: tool sprawl, deployment complexity, cognitive load, operational burden, documentation debt, knowledge silos, incident frequency, time to production, self-service capability, and team satisfaction. A score of 0–15 is sustainable, 16–25 is manageable, 26–35 is a warning, and 36–50 is crisis. Over 400 engineers have taken it; the average score is around 34. Companies that call Steve typically score 38 to 45.
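    A sketch of how such a scorecard might be computed, assuming each of the ten dimensions is rated 0–5 (the per-dimension scale is an assumption; the episode only gives the 0–50 total and the bands):

```python
# Hypothetical scoring of Steve's Simplicity Test: ten dimensions summed
# into a 0-50 score, then mapped to the bands described in the episode.
DIMENSIONS = [
    "tool_sprawl", "deployment_complexity", "cognitive_load", "operational_burden",
    "documentation_debt", "knowledge_silos", "incident_frequency",
    "time_to_production", "self_service", "team_satisfaction",
]
BANDS = [(15, "sustainable"), (25, "manageable"), (35, "warning"), (50, "crisis")]

def simplicity_band(ratings: dict) -> tuple:
    score = sum(ratings[d] for d in DIMENSIONS)
    return score, next(label for limit, label in BANDS if score <= limit)

ratings = dict.fromkeys(DIMENSIONS, 4)  # rating every dimension a 4
print(simplicity_band(ratings))         # → (40, 'crisis')
```

    Under this assumed scale, the reported average of 34 sits at the top of the warning band, and the 38–45 range of companies that call Steve lands squarely in crisis.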

    The Four Buckets categorize every tool: Essential (keep it), Redundant (duplicates something else — delete immediately), Over-engineered (solves a real problem but is too complicated — simplify it), or Premature (future-scale you don't have yet — delete for now).

    From one engagement with 47 tools: 12 were essential, 19 redundant, 11 over-engineered, and 5 premature — meaning 35 were deletable.
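    The bucket counts from that engagement can be tallied as a quick sanity check (the counts come from the episode; the code structure is illustrative):

```python
# Tool inventory from the engagement, grouped by Steve's four buckets.
buckets = {"essential": 12, "redundant": 19, "over_engineered": 11, "premature": 5}

total = sum(buckets.values())             # 47 tools audited
deletable = total - buckets["essential"]  # everything except the essentials
print(f"{deletable} of {total} tools are candidates for removal")
# → 35 of 47 tools are candidates for removal
```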

    He then prioritizes by impact versus risk, tackling high-impact, low-risk items first. For example, a large customer had Datadog, Prometheus, and New Relic running simultaneously with no clear rationale. Deleting New Relic took three hours, saved $30,000, and nobody noticed. Seventeen abandoned databases with zero connections in 30 days were deprecated by email, then deleted — zero responses, zero impact.

    The security angle matters here too: one of those abandoned databases was an unpatched attack surface sitting in production with no one monitoring it. Paulina adds a related example — her team once found a Flyway instance that had gone unpatched for seven or eight years because each team assumed the other was maintaining it. As she puts it, lack of ownership creates the same kind of hidden risk.

    The 30-Day Cleanup Sprint
    Steve structures platform simplification as a focused 30-day effort:

    Week 1: Audit. Discover what is actually running — not what the team thinks is running, because those are usually different.

    Week 2: Categorize. Apply the four buckets and prioritize quick wins. During this phase, Steve tells the team: "For 30 days, you're not building anything — you're only deleting things."

    Week 3: Delete. Remove redundant tools and simplify over-engineered ones.

    Week 4: Systemize. Document everything — how decisions were made, why tools were removed, and how new tool decisions should be evaluated going forward. Build governance dashboards to prevent complexity from creeping back.

    He illustrates this with a company whose VP of Engineering — "Sarah" — told him: "This isn't a technical problem anymore. This is a people problem." Two senior engineers had quit on the same day with the same exit interview: "I'm tired of fighting the platform." One said he had not had dinner with his kids on a weekend in six months. The team's morale score was 3.2 out of 10.

    The critical insight: the team already knew what was wrong. They had known for months. But nobody had been given permission to delete anything. "That's not a cultural problem and it's not a knowledge problem. It's a permissions problem. And I gave them the permission."

    Results: complexity score dropped from 42 to 26, monthly costs fell from $150,000 to $80,000 (roughly $840,000 in annual savings), and deployment time improved from two weeks to one day.

    But Steve emphasizes the human outcome. A developer told him afterward: "Steve, I went home at 5 p.m. yesterday. It's the first time in eight months. And my daughter said, 'Daddy, you're home.'" That, Steve says, is what this work is really about.

    Golden Paths, Guardrails, and Developer Experience
    Mattias says he wants the platform he builds to compete with the easiest external options — Vercel, Netlify, and the like. If developers would rather go elsewhere, the internal platform has failed.

    Steve agrees and describes a pattern he sees constantly: developers do not complain when the platform is painful — they route around it. He gives an example from a fintech where a lead developer ("James") needed a test environment for a Friday customer demo. The official process required a JIRA ticket, a two-day wait, YAML files, and a pipeline. Instead, James spun up a Render instance on his personal credit card: 12 minutes, deployed, did the demo, got the deal. Nobody knew for three months, until finance found the charges.

    Steve's view: that is not shadow IT or irresponsibility — it is a rational response to poor platform usability. "The fastest path to business value went around the platform, not through it."

    The solution is what Steve calls the golden path — or, as he reframes it using a bowling alley analogy, golden guardrails. Like the bumpers that keep the ball heading toward the pins regardless of how it is thrown, the guardrails keep developers on a safe path without dictating exactly how they get there. The goal is hitting the pins — delivering business value.

    Mattias extends the guardrails concept to security: the easiest path should also be the most secure and compliant one. If security is harder than the workaround, the workaround wins every time. He aims to make the platform so seamless that developers do not have to think separately about security — it is built into the default experience.

    Measuring Outcomes, Not Features
    Steve argues that platform teams should measure developer outcomes, not platform features: time to first deploy, time to fix a broken deployment, overall developer satisfaction, and how secure and compliant the default deployment paths are.

    He recommends monthly platform retrospectives where developers can openly share feedback. In these sessions, Steve goes around the room and insists that each person share their own experience rather than echoing the previous speaker. This builds a backlog of improvements directly tied to real developer pain.

    Paulina agrees that feedback is essential but notes a practical challenge: in many organizations, only a handful of more active developers provide feedback, while the majority say they do not have time and just want to write code. Collecting representative feedback requires deliberate effort.

    She also raises the business and management perspective. In her consulting experience, she has seen assessments include a third dimension beyond the platform team and developers: business leadership, who focus on compliance, security, and cost. Sometimes the platform enables fast development, but management processes still block frequent deployment to production — a mindset gap, not a technical one. Steve agrees and points to value stream mapping as a technique for surfacing these bottlenecks with data.

    Translating Engineering Work Into Business Value
    Steve makes a forceful case that engineering leaders must express technical work in business terms. "The uncomfortable truth is that engineering is a cost center. We exist to support profit centers. The moment we forget that, we optimize for architectural elegance instead of business outcomes — and we lose the room."

    He illustrates this with a story: a CFO asked seven engineering leaders one question — "How long to rebuild production if we lost everything tomorrow?" Five seconds of silence. Ninety-four years of combined experience, and nobody could answer. "That's where engineering careers die."

    The translation matters at every level. Saying "we deleted a Jenkins server" means nothing to a CFO. Saying "we removed $40,000 in annual costs and cut deployment failures by 60%" gets attention.

    Steve challenges listeners to take their last three technical achievements and rewrite each one with a currency figure, a percentage, and a timeframe. "If you can't, you're speaking engineering, not business."

    Closing Advice: Start Deleting This Week
    Steve's parting advice is concrete: pick one tool you suspect nobody is using, check the logs, and if nothing has happened in 30 days, deprecate it. In 60 days, delete it. He also offers the Simplicity Test for free — it takes eight minutes, produces a 0-to-50 score with specific recommendations, and is available by reaching out to him directly.
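    One way to read that timeline is as a small decision rule — the function name and the reading of "in 60 days" below are an interpretation of Steve's advice, not his tooling:

```typescript
type Step = "keep" | "deprecate" | "wait" | "delete";

// idleDays: days since the tool's logs last showed activity.
// deprecatedForDays: days since deprecation was announced (null if not yet).
function nextStep(idleDays: number, deprecatedForDays: number | null): Step {
  if (deprecatedForDays !== null) {
    // Reading "in 60 days, delete it" as: wait 60 days after deprecating.
    return deprecatedForDays >= 60 ? "delete" : "wait";
  }
  return idleDays >= 30 ? "deprecate" : "keep";
}
```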

    "Your platform's biggest risk isn't technical — it's political. Platforms die when the CFO asks you a question you can't answer, when your best engineer leaves, or when the team builds for their CV instead of the business."

    Highlights

    Steve Wade: "While everyone's collecting cloud native tools like Pokemon cards, I'm trying to help teams figure out which ones to throw away and which ones to keep."

    Steve Wade: "That's not a team, that's a help desk with infrastructure access." — on platform teams spending most of their time explaining their own platform

    Steve Wade: "If the answer is yes, it means you have a platform. And if it's no, it means you have platform theater."

    Steve Wade: "So many platforms are like the Sagrada Familia — they look super impressive, but they're never finished yet."

    Steve Wade: "Boring is beautiful, especially when you call me at 3 a.m."

    Steve Wade: "Google needs Kubernetes, but your Series B startup needs to ship features."

    Steve Wade: "One spreadsheet saved them from a migration that would have just simply destroyed the business model."

    Steve Wade: "They knew what was wrong. They'd known for months. But nobody was given the permission to delete anything."

    Steve Wade: "They're not really free. They're just free to install." — on open-source CNCF tools

    Steve Wade: "Your platform's power doesn't come from what you add. It comes from what you have the courage to delete."

    Mattias: Argues that an internal platform should compete with the easiest external alternatives — if developers would rather use Vercel, the platform has failed. Also extends the guardrails concept to security: the easiest path should always be the most secure path.

    Paulina: Highlights the ownership gap — tools can go unpatched for years when each team assumes another team maintains them. Also raises the management dimension: sometimes it is not the platform that is slow, but organizational processes that block deployment.

    Resources

    The Pragmatic CNCF Manifesto — Steve Wade's guide to cloud-native sanity, with six principles, pragmatic stack recommendations, and anti-patterns to avoid, drawn from 50+ enterprise migrations.

    The Deletion Digest — Steve's weekly newsletter for platform leaders, delivering one actionable lesson on deleting platform complexity every Saturday morning.

    CNCF Cloud Native Landscape — the full interactive map of cloud-native tools; the one Steve jokes you have to zoom your browser out just to see in its entirety.

    How Unnecessary Complexity Gave the Service Mesh a Bad Name (InfoQ) — a detailed analysis of why service meshes became the poster child for over-engineering in cloud-native environments.

    What Are Golden Paths? A Guide to Streamlining Developer Workflows — a practical guide to designing the "path of least resistance" that makes the right way the easy way.

    Value Stream Mapping for Software Delivery (DORA) — the DORA team's guide to the value stream mapping technique Steve mentions for surfacing bottlenecks between engineering and business.

    Inside Platform Engineering with Steve Wade (Octopus Deploy) — a longer conversation with Steve about platform engineering philosophy and practical approaches.

    Steve Wade on LinkedIn — where Steve regularly posts about platform simplification and can be reached directly.
  • The DevSecOps Talks Podcast

    #94 - Small Tasks, Big Wins: The AI Dev Loop at System Initiative

    11/03/2026 | 52 mins.
    We bring Paul Stack back to cover the parts we skipped last time. What changed when the models got better and we moved from one-shot Gen AI to agentic, human-in-the-loop work? How do plan mode and tight prompts stop AI from going rogue? Want to hear how six branches, git worktrees, and a TypeScript CLI came together? 

    We are always happy to answer any questions, hear suggestions for new episodes, or hear from you, our listeners.

    DevSecOps Talks podcast LinkedIn page

    DevSecOps Talks podcast website

    DevSecOps Talks podcast YouTube channel

    Summary
    In this episode, Mattias, Andrey, and Paulina welcome back returning guest Paul from System Initiative to continue a conversation that started in the previous episode about their project Swamp. The discussion digs into how AI-assisted software development has changed over the past year, and why the real shift is not "AI writes code" but humans orchestrating multiple specialized agents with strong guardrails. Paul walks through the practical workflows, multi-layered testing, architecture-first thinking, cost discipline, and security practices his team has adopted — while the hosts push on how this applies across enterprise environments, mentoring newcomers, and the uncomfortable question of who is responsible when AI-built software fails.

    Key Topics
    The industry crossroads: layoffs, fear, and a new reality
    Before diving into technical specifics, Paul acknowledges that the industry is at "a real crazy crossroads." He references Block (formerly Square) cutting roughly 40% of their workforce, citing uncertainty about what AI means for their teams. He wants to be transparent that System Initiative also shrank — but clarifies the company did not cut people because of AI. The decision to reduce headcount came before they even knew what they were going to build next, let alone how they would build it. AI entered the picture only after they started prototyping the next version of their product.

    Block's February 2026 layoffs, announced by CEO Jack Dorsey, eliminated over 4,000 positions. The move was framed as an AI-driven restructuring, making it one of the most visible examples of AI anxiety playing out in real corporate decisions.

    From GenAI hype to agentic collaboration
    Paul explains that AI coding quality shifted significantly around October–November of the previous year. Before that, results were inconsistent — sometimes impressive, often garbage. Then the models improved dramatically in both reasoning and code generation.

    But the bigger breakthrough, in his view, was not the models themselves. It was the industry's shift from "Gen AI" — one-shot prompting where you hand over a spec and accept whatever comes back — to agentic AI, where the model acts more like a pair programmer. In that setup, the human stays in the loop, challenges the plan, adds constraints, and steers the result toward something that fits the codebase.

    He gives a concrete early example: System Initiative had a CLI written in Deno (a TypeScript runtime). Because the models were well-trained on TypeScript libraries and the Deno ecosystem, they started producing decent code. Not beautiful, not perfectly architected — but functional. When Paul began feeding the agent patterns, conventions, and existing code to follow, the output became coherent with their codebase.

    This led to a workflow where Paul would open six Claude Code sessions at once in separate Git worktrees — isolated copies of the repository on different branches — each building a small feature in parallel, feeding them bug reports and data, and continuously interacting with the results rather than one-shotting them.

    Git worktrees let you check out multiple branches of the same repository simultaneously in separate directories. Each worktree is independent, so you can work on several features at once and merge them back via pull requests.

    He later expanded this by running longer tasks on a Mac Mini accessible via Tailscale (a mesh VPN), while handling shorter tasks on his laptop — effectively distributing AI workloads across machines.

    Why architecture matters more than ever
    One of Paul's strongest themes is that AI shifts engineering attention away from syntax and back toward architecture. He argues that AI can generate plenty of code, but without design principles and boundaries it will produce spaghetti on top of existing spaghetti.

    He introduces the idea of "the first thousand lines" — an anecdote he read recently claiming that the first thousand lines of code an agent helps write determine its path forward. If those lines are well-structured and follow clear design principles, the agent will build coherently on top of them. If they are messy and unprincipled, everything after will compound the mess.

    Paul breaks software development into three layers:

    Architecture — design patterns like DDD (Domain-Driven Design), CQRS (Command Query Responsibility Segregation)

    Patterns — principles like DRY (Don't Repeat Yourself), YAGNI (You Aren't Gonna Need It), KISS (Keep It Simple)

    Taste — naming conventions, module layout, project structure, Terraform module organization

    He argues the industry spent the last decade obsessing over "taste" while often mocking "ivory tower architects" — the people who designed systems but didn't write code. In an AI-driven world, those architectural concerns become critical again because the agent needs clear boundaries, domain structure, and intent to produce coherent output.

    Paulina agrees and observes that this trend may also blur traditional specialization lines, pushing engineers toward becoming more general "software people" rather than narrowly front-end, back-end, or DevOps specialists.

    Encoding design docs, rules, and constraints into the repo
    Paul describes how his team makes architecture actionable for AI by encoding system knowledge directly into the repository. Their approach has several layers:

    Design documents — Detailed docs covering the model layer (the actual objects, their purposes, how they relate), workflow construction (how models connect and pass data), and expression language behavior. These live in a /design folder in the open-source repo and describe the intent of every part of the system.

    Architectural rules — The agent is explicitly told to follow Domain-Driven Design: proper separation between domains, infrastructure, repositories, and output layers. The DDD skill is loaded so the agent understands and maintains bounded contexts.

    Code standards — TypeScript strict mode, no `any` types, named exports, passing lint and format checks. License compliance is also enforced: because the project is AGPL v3, the agent cannot pull in dependencies with incompatible licenses.

    Skills — A newer mechanism for lazy-loading contextual information into the AI agent. Rather than stuffing everything into one enormous prompt, skills are loaded on demand when the agent encounters a specific type of task. This keeps context windows lean and focused.

    AGPL v3 (GNU Affero General Public License) is a copyleft license that requires anyone who runs modified software over a network to make the source code available. This creates strict constraints on what dependencies can be used.
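    The lazy-loading idea behind skills can be sketched as a small registry that materializes context only on first use. This is an illustrative sketch of the concept, not Anthropic's actual implementation; all names are assumed:

```typescript
// Hypothetical skill registry: context is loaded on demand and cached,
// keeping the base prompt lean until a task actually needs a skill.
type SkillLoader = () => string;

class SkillRegistry {
  private loaders = new Map<string, SkillLoader>();
  private loaded = new Map<string, string>();

  register(name: string, loader: SkillLoader): void {
    this.loaders.set(name, loader);
  }

  // Load on first use; later calls reuse the cached text.
  contextFor(name: string): string {
    const cached = this.loaded.get(name);
    if (cached !== undefined) return cached;
    const loader = this.loaders.get(name);
    if (!loader) throw new Error(`unknown skill: ${name}`);
    const text = loader();
    this.loaded.set(name, text);
    return text;
  }

  loadedSkills(): string[] {
    return Array.from(this.loaded.keys());
  }
}
```

    Registering a DDD skill and a security skill but only invoking the first means only the first's text ever enters the context window — the point Paul makes about keeping contexts lean and focused.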

    Multi-agent development: the full chain
    A major part of the discussion centers on how Paul's team works with multiple specialized AI agents rather than a single all-knowing assistant. The chain looks like this:

    Issue triage agent — When a user opens a GitHub issue, an agent evaluates whether it is a legitimate feature request or bug report. The agent's summary is posted back to the issue immediately, creating context for later stages.

    Planning agent — If the issue is legitimate, the system enters plan mode. A specification is generated and posted for the user to review. Users can push back ("that's not how I think it should work"), and the plan is revised until everyone agrees.

    Implementation agent — The code is written based on the approved plan, with all the design docs, architectural rules, and skills loaded as context.

    Happy-path reviewer — A separate agent reviews the code against standards, checking that it loads correctly and appears to function.

    Adversarial reviewer — Added just days before the recording, this agent is told: "You are a grumpy DevOps engineer and I want you to pull this code apart." It looks for security injection points, failure modes, and anything the happy-path reviewer might miss.

    Both review agents write their findings as comments on the pull request, creating a visible audit trail. The PR only merges when both agents approve. If the adversarial agent flags a security vulnerability, the implementation goes back for changes.

    Paul says this "Jekyll and Hyde" review setup caught a path traversal bug in their CLI during its first week. While the CLI runs locally and the risk was limited, it proved the value of adversarial review.

    Path traversal is a vulnerability where an attacker can access files outside the intended directory by manipulating file paths (e.g., using ../ sequences). Even in CLI tools, this can expose sensitive files on a user's machine.
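    A generic guard against this class of bug (not the specific fix in Swamp's CLI, which the episode doesn't detail) resolves the requested path and checks that it stays inside the intended base directory:

```typescript
import { resolve, sep } from "node:path";

// Reject any user-supplied path that escapes baseDir after resolution,
// including "../" sequences and absolute paths.
function safeJoin(baseDir: string, userPath: string): string {
  const base = resolve(baseDir);
  const target = resolve(base, userPath);
  if (target !== base && !target.startsWith(base + sep)) {
    throw new Error(`path escapes ${base}: ${userPath}`);
  }
  return target;
}
```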

    Mattias compares the overall process to a modernized CI/CD pipeline — the same stages exist (commit, test, review, promote, release), but AI replaces some of the manual implementation steps while humans stay focused on architecture, review, and acceptance.

    Why external pull requests are disabled
    One of the more provocative decisions Paul describes: the open-source Swamp project does not accept external pull requests. GitHub recently added a feature to disable PR creation from non-collaborators entirely, and the team turned it on immediately.

    The reasoning is supply chain control. Because the project's code is 100% AI-generated within a tightly controlled context — design docs, architectural rules, skills, adversarial review — they want to ensure that all code entering the system passes through the same pipeline. External PRs would bypass that chain of custody.

    Contributors are instead directed to open issues. The team will work through the design collaboratively, plan it together, and then have their agents implement it. Paul frames this not as rejecting collaboration but as controlling the process: "We love contributions, but in the AI world, we cannot control where that code is from or what that code is doing."

    Self-reporting bugs: AI filing its own issues
    The team built a skill into Swamp itself so that when the tool encounters a bug during use, it can check out the version of the source code the binary was built against, analyze the problem, and open a GitHub issue automatically with detailed context.

    This creates high-quality bug reports that already contain the information needed to reason about a fix. When the implementation agent later picks up that issue, it has precise context — where the bug is, what triggered it, and what the expected behavior should be. Paul says the quality of issues generated this way is significantly higher than typical user-filed bugs.
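    The structure of such a report might look like the sketch below — the field names and format are assumptions, since the episode doesn't show Swamp's actual issue template:

```typescript
// Hypothetical shape of a self-reported bug, rendered as a GitHub issue body.
interface BugContext {
  version: string;     // build version / commit the binary maps to
  command: string;     // what the user ran when the bug surfaced
  error: string;       // the observed failure
  expected: string;    // the expected behavior
  sourceHint?: string; // suspected location from analyzing the source
}

function issueBody(ctx: BugContext): string {
  const lines = [
    `**Version:** ${ctx.version}`,
    `**Command:** \`${ctx.command}\``,
    `**Error:** ${ctx.error}`,
    `**Expected:** ${ctx.expected}`,
  ];
  if (ctx.sourceHint) lines.push(`**Suspected location:** ${ctx.sourceHint}`);
  return lines.join("\n");
}
```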

    Testing: the favorite part
    Although the conversation starts with code generation, Paul says testing is actually his favorite part of the workflow. The team runs multiple layers:

    Product-level tests:
    - Unit and integration tests — standard code-level verification
    - Architectural fitness tests — contract tests, property tests, and DDD boundary checks that verify the domain doesn't leak and the agent followed its instructions

    Architectural fitness tests are automated checks that verify a system's structure conforms to its intended architecture. In DDD, this means ensuring bounded contexts don't leak dependencies across domain boundaries.

    User-level tests (separate repo):
    - User flow tests — written from the user's perspective against compiled binaries, not source code. These live in a different repository specifically so they are not influenced by how the code is written. They test scenarios like: create a repository, extend the system, create a workflow, run a model, handle wrong inputs.

    Adversarial tests (multiple tiers):
    1. Security boundary tests — path traversal, environment variable exposure, supply chain attack vectors. Paul references the recent Trivy incident, where a bot stole an API key and used it to delete all of Trivy's GitHub releases and publish a poisoned VS Code extension.
    2. State corruption — what happens when someone tampers with the state layer
    3. Concurrency — multiple writes, lock failures, race conditions
    4. Resource exhaustion — handling pathological inputs like a 100MB stdout message injected into a workflow

    Only after all these layers pass does a build get promoted from nightly to stable. Paul can download and manually test any nightly build that maps back to a specific commit.
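    The DDD boundary checks mentioned above can be sketched as a pure function over each module's import list — file paths and layer names here are illustrative, not Swamp's real layout:

```typescript
// Hypothetical architectural fitness check: the domain layer must never
// import from the infrastructure layer.
interface Module {
  path: string;
  imports: string[];
}

function domainViolations(modules: Module[]): string[] {
  return modules
    .filter((m) => m.path.startsWith("src/domain/"))
    .flatMap((m) =>
      m.imports
        .filter((imp) => imp.includes("/infrastructure/"))
        .map((imp) => `${m.path} -> ${imp}`)
    );
}
```

    A real version would parse actual import statements; the principle is the same — the build fails whenever the bounded context leaks.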

    Paulina points out that if AI is a force multiplier, there is now even less excuse not to write tests. Paul agrees: "We were scraping the barrel before at coming up with reasons why there shouldn't be any tests. Now that's eliminated."

    Plan mode as a safety rail
    Paul repeatedly emphasizes "plan mode," particularly in Claude Code. Before the agent changes anything, it produces a detailed plan describing what it intends to do and why, and waits for human approval.

    The hosts immediately draw a parallel to terraform plan — the value is not just automation, but the chance to inspect intended changes before applying them. Paul says this was one of the biggest improvements in AI-assisted development because it reduces horror-story scenarios where an agent goes off and deletes a database or rewrites an application.

    He notes that other tools are starting to adopt plan mode because it produces better results across the board. But he also warns that plan mode only helps if people actually read the plan — just like Terraform, the safeguard depends on human discipline. "If there's a big line in the middle that says 'I'm going to delete a database' and you haven't read it — it's the same thing."
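    The discipline Paul describes — approval only counts if the human saw the exact plan — can be made concrete with a toy gate. This illustrates the principle only; it is not how Claude Code implements plan mode:

```typescript
// Hypothetical plan-mode gate: execution is allowed only when the steps the
// reviewer actually read match the plan the agent will run, step for step.
interface Plan {
  steps: string[];
}

function approvePlan(plan: Plan, reviewedSteps: string[]): boolean {
  return (
    plan.steps.length === reviewedSteps.length &&
    plan.steps.every((step, i) => step === reviewedSteps[i])
  );
}
```

    If the reviewer skimmed past a "drop old table" step, the gate stays closed — the code equivalent of actually reading the terraform plan output.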

    Practical lessons for getting good results
    Paul shares several tactical lessons:

    Don't give mega-prompts — "Don't write 'rebuild me Slack' and expect an AI agent to do it. You'd be shocked at the amount of people who try this."

    Always provide design docs — Specifications produce dramatically better output than vague instructions.

    Don't skip straight to code — Start with design and planning, not "code me this method."

    Small tasks — Don't attempt project-wide rewrites with a single agent. Context loss is the new "restart your machine."

    Never trust the first plan — Paul routinely asks the agent: "Are you sure this is the right plan? Explain it to me like a five-year-old. Does this fulfill what the user needed?"

    Compare implementation to plan — After implementation, ask the agent to map what it actually did against the original plan and explain any deviations.

    He also notes that the generated TypeScript is not always how a human would write it — but that matters less if the result is well-tested, secure, and respects domain boundaries. "The actual syntax of the code itself can change 12 times a day. It doesn't really matter as long as it adheres to the product."

    Human oversight at every stage
    Despite all the automation, Paul is adamant that humans remain involved at every stage. Plans are reviewed, implementations are questioned, pull request comments are inspected, binaries are tested before reaching stable release. He describes it as "continually interacting with Claude Code" rather than just letting things happen.

    When Paulina pushes on whether a human still checks things before production, Paul makes clear: yes, always. The release pipeline goes from commit to nightly to manual verification to stable promotion. "I will always download something before it goes to stable."

    The context-switching tax
    Paul acknowledges that running multiple agents in parallel is not for everyone. Context switching has always been expensive for engineers, and commanding multiple agents simultaneously is a new form of it. His advice: if you work best focusing on a single task, don't force the multi-agent style. "It'll be such a context switching killer and it'll cause you to lose focus."

    The key shift is that instead of writing code, you are "commanding architecture and commanding design." But that still requires focus and judgment.

    AI as a force multiplier, not a replacement
    Paulina captures the dynamic bluntly: "It's a multiplier. If there is a good thing, you'll get a lot of good thing. If it's a shit, you're going to get a lot of shit."

    Paul argues that experienced software and operations people are still essential because they understand architecture, security, constraints, and tradeoffs. AI amplifies whatever is already there — good engineering or bad engineering alike.

    He believes engineers who learn to use these tools well become "even more important to your company than you already are." But he also acknowledges that some people will not want to work this way, and that friction between AI-forward and AI-resistant teams is already happening in organizations.

    The challenge for juniors and newcomers
    Paulina raises this personally — she was recently asked to mentor someone entering IT and struggled with how to approach it. She doesn't have a formal IT education (she has an engineering background) and learned on the go. The skills she built through manual work — understanding when code needs refactoring as scale changes, knowing how to structure projects at different sizes — are hard to teach when AI handles so much of the writing.

    Paul agrees this is an open question and says the industry is still figuring out the patterns. He believes teaching principles, architecture, and core engineering fundamentals becomes even more important, because tool-specific syntax is increasingly handled by AI. "Do you need to know how to write a Terraform module? Do you need to know how to write a Pulumi provider?" — these are becoming less essential as individual skills, while understanding how systems fit together matters more.

    He frames this as an opportunity: "We are now in control of helping shape how this moves forward in the industry." As innovators and early adopters, current practitioners can set the patterns. If they don't, someone else will.

    Security, responsibility, and the risk of low-code AI
    Paulina raises a concrete example from Poland: someone built an app using AI to upload receipts to a centralized accounting system, released it publicly, and exposed all their customers' data.

    This leads to a deeper question from Mattias about responsibility: if someone with no engineering background builds an insecure app using an AI tool, who is accountable? The user? The platform? The model provider? The episode doesn't settle this, but Paul argues it reinforces why skilled engineers remain essential. The AI doesn't know the security boundary unless someone explicitly teaches it — "it probably wasn't fed that information that it had to think about the security context."

    He expects more specialized skills and agents focused on security, accessibility, and compliance to emerge — calling out the example of loading a security skill and an accessibility skill when you know an app will be public-facing. But he says the ecosystem is not fully there yet.

    Cost discipline: structure beats vibe coding
    Paul addresses economics directly. His five-person team at System Initiative all use Claude Max Pro at $200 per person per month. They do not exceed that cost for the full AI workflow — code generation, reviews, planning, and adversarial testing.

    In contrast, he has seen other organizations spend $10,000–$12,000 per month per developer on AI tokens because they let tools roam with huge context windows and vague instructions. His conclusion: tightly scoped tasks are not just better for quality — they are far cheaper.

    This maps directly to classic engineering wisdom. Tightly defined stories and tasks were always more efficient to push through a system than "go rebuild this thing and I'll see you in six months." The same principle applies to AI agents.

    How to introduce AI in cautious organizations
    For teams in companies that ban or restrict AI, Paul suggests a pragmatic entry point: use agents to analyze, not to write code.

    He describes a conversation with someone in London who asked how to get started. Paul's advice: if you already know roughly where a bug lives, ask the agent to analyze the same bug report. If it identifies the same area and the same root cause, you have evidence that the tool can accelerate diagnosis. Show your CTO: "I'm diagnosing bugs 50% faster with this agent. It's not writing code — it's helping me understand where the issue is."

    Similar analysis-first use cases work for accessibility reviews, security scans, or code quality assessments. The point is to build trust before expanding scope. Paul notes this approach works faster in the private sector than the public sector, where technology adoption has always been slower.

    The pace of change is accelerating
    Paul believes the conversation has shifted dramatically in the past six months — from AI horror stories and commiserating over drinks to genuine success stories and conferences forming around agentic engineering practices. He points to two upcoming events:

    An Agentic conference in Stockholm organized by Robert Westin, who Paulina mentions meeting just the day before

    Agentic Conf in Hamburg at the end of March

    His prediction: the pace is not linear. "We're honestly exponential at this moment in time." He sidesteps the ethics of AI companies (referencing tensions between Anthropic and OpenAI) to focus on the practical reality that models, reasoning, and tooling are all improving at a compounding rate.

    Highlights

    Paul on the industry mood: "We're at this real crazy crossroads in the industry."

    Paul on model quality: Around late last year, "the models got really good, like really, really good. Really, really, really incredible."

    Paul on running six agents at once: "I had my terminal open. I had six Claude Codes going at once, building like six different small features at a time."

    Paulina's blunt summary: "It's a multiplier. If there is a good thing, you'll get a lot of good thing. If it's a shit, you're going to get a lot of shit."

    Paul's spicy take on architecture: The industry spent years mocking "ivory tower architects," but now AI is pushing engineers back into "the architecture lab."

    Paul on plan mode: It is the AI equivalent of terraform plan — useful, but only if people actually read it.

    Paul's unusual open-source policy: External pull requests are disabled entirely so the team can control the supply chain.

    Most memorable workflow detail: Paul's adversarial reviewer is instructed, "You are a grumpy DevOps engineer and I want you to pull this code apart."

    Practical win: That grumpy reviewer caught a path traversal bug in its first week.

    Paul on testing excuses: "We were scraping the barrel before at coming up with reasons why there shouldn't be any tests. Now that's eliminated."

    Paul's warning against mega-prompts: Asking an agent to "rebuild me Slack" with no context is a great way to conclude, incorrectly, that "AI is garbage."

    Paul on context loss: "Compact your conversation" is the new "restart your machine."

    Paul on cost: Five people, $200/month each, covering the full AI workflow — versus orgs spending $10K+ per developer per month on unfocused vibe coding.

    Paul on adoption strategy: Start by letting AI analyze bugs and accessibility issues — build trust before asking it to write code.

    Resources

    Claude Code Documentation — Common Workflows — Official Anthropic docs covering plan mode, extended thinking, and agentic workflows in Claude Code.

    Anthropic Skills Repository — Public repository of modular Markdown-based skill packages that extend Claude's capabilities for specialized tasks like security review or accessibility checking.

    System Initiative on GitHub — The open-source repository for System Initiative's infrastructure automation platform, including the design documents Paul references.

    Git Worktree Documentation — Official docs for git worktree, the feature that enables Paul's multi-branch parallel development workflow.

    Trivy Supply Chain Attack Analysis (Socket) — Detailed writeup of the incident Paul references where a bot stole credentials, deleted GitHub releases, and published a poisoned VS Code extension.

    Building Evolutionary Architectures — Fitness Functions — The book and concepts behind architectural fitness tests that Paul's team uses to verify DDD boundaries and prevent domain leakage.

    GitHub: New Repository Settings for Pull Request Access — The GitHub feature Paul mentions for disabling external pull requests at the repository level.
  • The DevSecOps Talks Podcast

    #93 - The DevSecOps Perspective: Key Takeaways From Re:Invent 2025

    05/03/2026 | 27 mins.
    Andrey and Mattias share a fast re:Invent roundup focused on AWS security. What do VPC Encryption Controls, post-quantum TLS, and org-level S3 block public access change for you? Which features should you switch on now, like ECR image signing, JWT checks at ALB, and air-gapped AWS Backup? Want simple wins you can use today?


    We are always happy to answer any questions, hear suggestions for new episodes, or hear from you, our listeners.

    DevSecOps Talks podcast LinkedIn page

    DevSecOps Talks podcast website

    DevSecOps Talks podcast YouTube channel

    Summary
    In this episode, Andrey and Mattias deliver a security-heavy recap of AWS re:Invent 2025 announcements, while noting that Paulina is absent and wishing her a speedy recovery. Out of the 500+ releases surrounding re:Invent, they narrow the list down to roughly 20 features that security-conscious teams can act on today — covering encryption, access control, detection, backups, container security, and organization-wide guardrails. Along the way, Andrey reveals a new AI-powered product called Boris that watches the AWS release firehose so you don't have to.

    Key Topics
    AWS re:Invent Through a Security Lens
    The hosts frame the episode as the DevSecOps Talks version of a re:Invent recap, complementing a FivexL webinar held the previous month. Despite the podcast's name covering development, security, and operations, the selected announcements lean heavily toward security. Andrey is upfront about it: if security is your thing, stay tuned; otherwise, manage your expectations.

    At the FivexL webinar, attendees were asked to prioritize areas of interest across compute, security, and networking. AI dominated the conversation, and people were also curious about Amazon S3 Vectors — a new S3 storage class purpose-built for vector embeddings used in RAG (Retrieval-Augmented Generation) architectures that power LLM applications. It is cost-efficient but lacks hybrid search at this stage.

    VPC Encryption and Post-Quantum Readiness
    One of the first and most praised announcements is VPC Encryption Control for Amazon VPC, a pre-re:Invent release that lets teams audit and enforce encryption in transit within and across VPCs. The hosts highlight how painful it used to be to verify internal traffic encryption — typically requiring traffic mirroring, spinning up instances, and inspecting packets with tools like Wireshark. This feature offers two modes: monitor mode to audit encryption status via VPC flow logs, and enforce mode to block unencrypted resources from attaching to the VPC.

    Mattias adds that compliance expectations are expanding. It used to be enough to encrypt traffic over public endpoints, but the bar is moving toward encryption everywhere, including inside the VPC. The hosts also call out a common pattern: terminating TLS at the load balancer and leaving traffic to the targets unencrypted. VPC Encryption Control helps catch exactly this kind of blind spot.

    The discussion then shifts to post-quantum cryptography (PQC) support rolling out across AWS services including S3, ALB, NLB, AWS Private CA, KMS, ACM, and Secrets Manager. AWS now supports ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism), a NIST-standardized post-quantum algorithm, along with ML-DSA (Module-Lattice-Based Digital Signature Algorithm) for Private CA certificates.

    The rationale: state-level actors are already recording encrypted traffic today in a "harvest now, decrypt later" strategy, betting that future quantum computers will crack current encryption. Andrey notes that operational quantum computing feels closer than ever, making it worthwhile to enable post-quantum protections now — especially for sensitive data traversing public networks.

    S3 Security Controls and Access Management
    Several S3-related updates stand out. Attribute-Based Access Control (ABAC) for S3 allows access decisions based on resource tags rather than only enumerating specific actions in policies. This is a powerful way to scope permissions — for example, granting access to all buckets tagged with a specific project — though it must be enabled on a per-bucket basis, which the hosts note is a drawback even if necessary to avoid breaking existing security models.
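    The tag-matching idea behind ABAC can be shown with a small sketch. This is plain Python for illustration only, not the real IAM policy engine (which evaluates JSON policies with condition keys such as `aws:ResourceTag`); the tag names and values are made up:

    ```python
    # Toy illustration of attribute-based access control (ABAC):
    # allow access when the principal's "project" tag matches the
    # resource's "project" tag, instead of enumerating resources.

    def abac_allows(principal_tags: dict, resource_tags: dict, key: str = "project") -> bool:
        """Grant access only if both sides carry the same value for `key`."""
        return (
            key in principal_tags
            and key in resource_tags
            and principal_tags[key] == resource_tags[key]
        )

    # A principal tagged project=atlas reaches buckets tagged project=atlas...
    print(abac_allows({"project": "atlas"}, {"project": "atlas"}))   # True
    # ...but not buckets tagged for another project, or untagged buckets.
    print(abac_allows({"project": "atlas"}, {"project": "zephyr"}))  # False
    print(abac_allows({"project": "atlas"}, {}))                     # False
    ```

    The appeal is that one policy scales with tagging discipline: new buckets become reachable by tagging them, with no policy edits.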

    The bigger crowd-pleaser is S3 Block Public Access at the organization level. Previously available at the bucket and account level, this control can now be applied across an entire AWS Organization. The hosts call it long overdue and present it as the ultimate "turn it on and forget it" control: in 2026, there is no good reason to have a public S3 bucket.

    Container Image Signing
    Amazon ECR Managed Image Signing is a welcome addition. ECR now provides a managed service for signing container images, leveraging AWS Signer for key management and certificate lifecycle. Once configured with a signing rule, ECR automatically signs images as they are pushed. This eliminates the operational overhead of setting up and maintaining container image signing infrastructure — previously a significant barrier for teams wanting to verify image provenance in their supply chains.

    Backups, Air-Gapping, and Ransomware Resilience
    AWS Backup gets significant attention. The hosts discuss air-gapped AWS Backup Vault support as a primary backup target, positioning it as especially relevant for teams where ransomware is on the threat list. These logically air-gapped vaults live in an Amazon-owned account and are locked by default with a compliance vault lock to ensure immutability.

    The strong recommendation: enable AWS Backup for any important data, and keep backups isolated in a separate account from your workloads. If an attacker compromises your production account, they should not be able to reach your recovery copies. Related updates include KMS customer-managed key support for air-gapped vaults for better encryption flexibility, and GuardDuty Malware Protection for AWS Backup, which can scan backup artifacts for malware before restoration.

    Data Protection in Databases
    Dynamic data masking in Aurora PostgreSQL draws praise from both hosts. Using the new pg_columnmask extension, teams can configure column-level masking policies so that queries return masked data instead of actual values — for example, replacing credit card numbers with wildcards. The data in the database remains unmodified; masking happens at query time based on user roles.

    Mattias compares it to capabilities already present in databases like Snowflake and highlights how useful it is when sharing data with external partners or other teams. When the idea of using masked production data for testing comes up, the hosts gently push back — don't do that — but both agree that masking at the database layer is a strong control because it reduces the risk of accidental data exposure through APIs or front-end applications.
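    Conceptually, query-time masking leaves the stored data untouched and rewrites values on the way out based on the caller's role. A minimal sketch of that idea in plain Python (pg_columnmask itself is configured through SQL-level column policies inside Aurora PostgreSQL; the roles and columns here are invented):

    ```python
    # Sketch of query-time masking: the stored row is never modified;
    # masking is applied per column when a result is returned, based
    # on the caller's role. (Conceptual illustration only.)

    def mask_card(value: str) -> str:
        """Replace all but the last four digits with wildcards."""
        return "*" * (len(value) - 4) + value[-4:]

    MASK_POLICIES = {"card_number": mask_card}   # column -> masking function
    UNMASKED_ROLES = {"fraud_analyst"}           # roles allowed to see real values

    def select_row(row: dict, role: str) -> dict:
        if role in UNMASKED_ROLES:
            return dict(row)
        return {col: MASK_POLICIES.get(col, lambda v: v)(val) for col, val in row.items()}

    row = {"customer": "Ada", "card_number": "4111111111111111"}
    print(select_row(row, role="support_agent"))  # card shows as ************1111
    print(select_row(row, role="fraud_analyst"))  # full value; stored row unchanged
    ```

    Doing this at the database layer rather than in each API or front end is exactly why the hosts like it: one policy covers every consumer.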

    Identity, IAM, and Federation Improvements
    The episode covers several IAM-related features. AWS IAM Outbound Identity Federation allows federating AWS identities to external services via JWT, effectively letting you use AWS identity as a platform for authenticating to third-party services — similar to how you connect GitHub or other services to AWS today, but in the other direction.

    The AWS Login CLI command provides short-lived credentials for IAM users who don't have AWS IAM Identity Center (SSO) configured. The hosts see it as a better alternative to storing static IAM credentials locally, but also question whether teams should still be relying on IAM users at all — their recommendation is to set up IAM Identity Center and move on.

    The AWS Source VPC ARN condition key gets particular enthusiasm. It allows IAM policies to check which VPC a request originated from, enabling conditions like "allow this action only if the request comes from this VPC." For teams doing attribute-based access control in IAM, this is a significant addition.

    AWS Secrets Manager Managed External Secrets is another useful feature that removes a common operational burden. Previously, rotating third-party SaaS credentials required writing and maintaining custom Lambda functions. Managed external secrets provides built-in rotation for partner integrations — Salesforce, BigID, and Snowflake at launch — with no Lambda functions needed.

    Better Security at the Network and Service Layer
    JWT verification in AWS Application Load Balancer simplifies machine-to-machine and service-to-service authentication. Teams previously had to roll their own Lambda-based JWT verification; now it is supported out of the box. The recommendation is straightforward: drop the Lambda and use the built-in capability.

    AWS Network Firewall Proxy is in public preview. While the hosts have not explored it deeply, their read is that it could help with more advanced network inspection scenarios — not just outgoing internet traffic through NAT gateways, but potentially also traffic heading toward internal corporate data centers.

    Developer-Oriented: REST API Streaming
    Although the episode is mainly security-focused, the hosts include REST API streaming in Amazon API Gateway as a nod to developers. This enables progressive response payload streaming, which is especially relevant for LLM use cases where streaming tokens to clients is the expected interaction pattern. Mattias notes that applications are moving beyond small JSON payloads — streaming is becoming table stakes as data volumes grow.

    Centralized Observability and Detection
    CloudWatch unified management for operational, security, and compliance data promises cross-account visibility from a single pane of glass, without requiring custom log aggregation pipelines built from Lambdas and glue code. The hosts are optimistic but immediately flag the cost: CloudWatch data ingest pricing can escalate quickly when dealing with high-volume sources like access logs. Deep pockets may be required.

    Detection is a recurring theme throughout the episode. The hosts discuss CloudTrail Insights for data events (useful if you are already logging data-plane events — another deep-pockets feature); extended threat detection for EC2 and ECS in GuardDuty, which uses AI-powered analysis to correlate security signals across network activity, runtime behavior, and API calls; and the public preview of AWS Security Agent for automated security investigation.

    On GuardDuty specifically, the recommendation is clear: if you don't have it enabled, go enable it — it gives you a good baseline that works out of the box across your services with minimal setup. You can always graduate to more sophisticated tooling later, but GuardDuty is the stopgap you start with.

    Mattias drives the broader point home: incidents are inevitable, and what you can control is how fast you detect and respond. AWS is clearly investing heavily in the detection side, and teams should enable these capabilities as fast as possible.

    Control Tower, Organizations, and Guardrails at Scale
    Several updates make governance easier to adopt at scale:

    - Dedicated controls for AWS Control Tower without requiring a full Control Tower deployment — you can now use Control Tower guardrails à la carte.
    - Automatic enrollment in Control Tower — a feature the hosts feel should have existed already.
    - Required tags in Organizations stack policies — enforcing tagging standards at the organization level.
    - Amazon Inspector organization-wide management — centralized vulnerability scanning across all accounts.
    - Billing transfer for AWS Organizations — useful for AWS resellers managing multiple organizations.
    - Delete protection for CloudWatch Log Groups — a small but important safeguard.

    Mattias says plainly: everyone should use Control Tower.

    MCP Servers and AWS's Evolving AI Approach
    The conversation shifts to the public preview of AWS MCP (Model Context Protocol) servers. Unlike traditional locally-hosted MCP servers that proxy LLM requests to API calls, AWS is taking a different approach with remote, fully managed MCP servers hosted on AWS infrastructure. These allow AI agents and AI-native IDEs to interact with AWS services over HTTPS without running anything locally.

    AWS launched four managed MCP servers — AWS, EKS, ECS, and SageMaker — that consolidate capabilities like AWS documentation access, API execution across 15,000+ AWS APIs, and pre-built agent workflows. However, the IAM model is still being worked out: you currently need separate permissions to call the MCP server and to perform the underlying AWS actions it invokes. The hosts treat this as interesting but still evolving.

    Boris: AI for AWS Change Awareness
    Toward the end of the episode, Andrey reveals a personal project: Boris (getboris.ai), an AI-powered DevOps teammate he has been building. Boris connects to the systems an engineering team already uses and provides evidence-based answers and operational automation.

    The specific feature Andrey has been working on takes the AWS RSS feed — where new announcements land daily — and cross-references it against what a customer actually has running in their AWS Organization. Instead of manually sifting through hundreds of releases, Boris sends a digest highlighting only the announcements relevant to your environment and explaining how you would benefit.

    Mattias immediately connects this to the same problem in security: teams are overwhelmed by the constant flow of feature updates and vulnerability news. Having an AI that filters and contextualizes that information is, in his words, "brilliant."

    Andrey also announces that Boris has been accepted into the Tehnopol AI Accelerator in Tallinn, Estonia — a program run by the Tehnopol Science and Business Park that supports early-stage AI startups — selected from more than 100 companies.

    Highlights

    Setting expectations: "The selection of announcements smells more like security only. If security is your thing, stay tuned in. If it's not really, well, it's still interesting, but I'm just trying to manage your possible disappointment."

    On VPC encryption control: The hosts describe how proving internal encryption used to require traffic mirroring, Wireshark, and significant pain — this feature makes it a configuration toggle.

    On public S3 buckets: "In 2026 there is no good reason to have a public S3 bucket. Just turn it on and forget about it."

    On production data for testing: When someone floats using masked production data for testing — "Maybe don't do that."

    On detection over prevention: "You cannot really prevent something from happening in your environment. What you can control is how you react when it's going to happen. Detection is really where I put effort."

    On Boris: When Andrey describes how Boris watches the AWS release feed and tells you which announcements actually matter for your environment, Mattias's reaction: "This is brilliant."

    On getting started with AWS security: "If you are a startup building on AWS and compliance is important, it's quite easy to get it working. All the building blocks and tools are available for you to do the right things."

    Resources

    Introducing VPC Encryption Controls — AWS blog post explaining monitor and enforce modes for VPC encryption in transit.

    AWS Post-Quantum Cryptography — AWS overview of post-quantum cryptography migration, including ML-KEM support across S3, ALB, NLB, KMS, and Private CA.

    S3 Block Public Access Organization-Level Enforcement — Announcement for enforcing S3 public access blocks across an entire AWS Organization.

    Amazon ECR Managed Container Image Signing — Guide to setting up managed image signing with ECR and AWS Signer.

    GuardDuty Extended Threat Detection for EC2 and ECS — How GuardDuty uses AI/ML to correlate security signals and detect multi-stage attacks on compute workloads.

    Dynamic Data Masking for Aurora PostgreSQL — How to configure column-level data masking with the pg_columnmask extension.

    Understanding IAM for Managed AWS MCP Servers — AWS Security Blog post explaining the IAM permission model for remote MCP servers.

    B.O.R.I.S — Your AI DevOps Teammate — The AI-powered product discussed in the episode that tracks AWS announcements and matches them to your environment.

About The DevSecOps Talks Podcast

This is the show by and for DevSecOps practitioners who are trying to survive information overload, cut through marketing nonsense, make the right technology bets, help their organizations deliver value, and, last but not least, have some fun. Tune in for talks about technology, ways of working, and news from DevSecOps. This show is not sponsored by any technology vendor and tries to be as unbiased as possible. We talk like no one is listening! For good or bad :) For more info, show notes, and discussion of past and upcoming episodes visit devsecops.fm