PodcastsTechnologyResilient Cyber

Resilient Cyber

Chris Hughes
Resilient Cyber
Latest episode

206 episodes

  • Resilient Cyber

    Why AI Security Feels So Fragile

    01/05/2026 | 23 mins.
    AI security feels fragile right now — and in this episode, Ron Bennatan, VP of Strategy, AI and Database Security at Varonis and founder of Guardium, JSonar, and AllTrue.ai, explains exactly why.
    Ron unpacks what "fragile" actually means in the context of AI: it's a black box that requires careful handling, is sensitive to pressure, and is being outpaced by change that isn't linear or polynomial — it's exponential. What took 30 years of AI development previously has been eclipsed by the last three months alone.
    Drawing on 30 years in data security, Ron walks through how his journey from Guardium (structured data) to Varonis (historically unstructured data) represents a reunion that was always inevitable — because the policies and security motions were always the same, even when the industry split the two apart. Now, with AI agents becoming the dominant access pattern in the enterprise — potentially replacing 99% of traditional human-driven data access — the data layer is emerging as the most durable signal in AI security.
    The conversation covers why the AllTrue.ai thesis — that consumability and bridging the governance/security divide are more important than the tools themselves — translated naturally into the Varonis platform. Ron also breaks down why least privilege is fundamentally harder with agents (the permissioning model can't be deterministic when the decision-making isn't), why agents being unaccountable — no salary, no fear of being fired — makes detective controls less effective, and why the industry must accelerate toward preventive controls and intent analysis operating at machine speed.
    Key topics covered:
    Why AI security is fragile: the black box problem and exponential rate of change
    How Varonis unifies structured and unstructured data security for the agentic era
    Lessons from AllTrue.ai on consumability, and collapsing AI governance and security
    Why 99% of enterprise data access will soon flow through AI agents
    Intent analysis and chain-of-thought as the next frontier of data security
    Least privilege vs. least autonomy — and why the permissioning model must evolve
    Why agents' lack of accountability breaks the detect-and-alert model
    The shift from monitoring to prevention and assurance at the data layer
  • Resilient Cyber

    You Can't Trust What You Can't Verify — The Case for AI Model Identity

    28/04/2026 | 1 mins.
    Most organizations deploying AI today cannot answer a deceptively simple question. Which model is actually running in their environment?
    It is not a hypothetical concern. Model substitution, supply chain compromise, adversarial fine-tuning, and jurisdictional compliance gaps are all live risk vectors — and the industry has largely been relying on contractual guarantees from AI vendors rather than technical controls to address them.
    That gap is exactly what Project VAIL was built to close.
    In this episode I sat down with Manish Shah, Co-founder and CEO of Project VAIL (Verifiable Artificial Intelligence Layer). Manish is a repeat founder with 20+ years of company building experience, including as co-founder of LiveRamp, and he is now bringing that background to one of the most consequential unsolved problems in AI security, provably knowing and verifying which model is executing in your environment at runtime.
    VAIL’s approach combines two core technologies. Behavioral fingerprinting creates a unique, verifiable identity for AI models based on how they actually behave during inference, without relying on access to model weights or architecture. ZkTorch, developed in collaboration with researchers at UIUC, brings zero-knowledge proofs to large generative AI models for the first time at practical scale, enabling cryptographic verification of model computations without exposing sensitive model internals.
    We covered a lot of ground in this conversation, including:
    Why behavioral fingerprinting is a fundamentally different and more resilient approach to model identification 
    How model identity becomes a critical security primitive as agentic AI deployments expand 
    Detecting prohibited and derivative models, including open-source models derived from Chinese-origin foundations like DeepSeek and Qwen 
    Where frameworks like NIST AI RMF and the EU AI Act fall short on model verification requirements 
    How verified model fingerprints fit into zero-trust architectures for AI systems and agentic workflows 
    What standardization for verifiable AI needs to look like and which bodies should be driving it
    Model verification is not a niche research problem. It is becoming a foundational requirement for AI governance, compliance, and security in regulated industries and high-stakes deployments alike. 
    This episode gives you both the technical grounding and the strategic context to understand why.
  • Resilient Cyber

    Securing the Vibe: Tanya Janca on AI-Generated Code, Mythos, and the New AppSec Reality

    27/04/2026 | 38 mins.
    A new episode of the Resilient Cyber Show just dropped, and this one is a conversation I’ve been looking forward to for a long time.
    I sat down with Tanya Janca, better known to most of the AppSec world as SheHacksPurple. Tanya is the best-selling author of Alice and Bob Learn Application Security and Alice and Bob Learn Secure Coding, an OWASP Lifetime Distinguished Member, CEO of She Hacks Purple Consulting, and one of the most recognized voices in application security and developer education on the planet.
    The timing of this conversation is hard to overstate. The OWASP Top 10 2025 was announced at the Global AppSec Conference last year, with two new categories, Software Supply Chain Failures and Mishandling of Exceptional Conditions, and SSRF folded into Broken Access Control. Recently, Anthropic released the Claude Mythos Preview system card, documenting a model that has already found thousands of high-severity zero-day vulnerabilities autonomously, including bugs in every major operating system and web browser, and a 27-year-old vulnerability in OpenBSD.
    In other words, AppSec is at a hinge moment, and Tanya is exactly the right person to think out loud with about it.
    Here’s what we get into:
    What the OWASP Top 10 2025 got right, what it missed, and how teams should actually use it
    AI-generated code, “vibe coding,” and Tanya’s brand-new free prompt library for secure coding with AI assistants, SecureMyVibe.ca
    What Mythos-class capabilities mean for the offense/defense asymmetry AppSec has always lived with
    How AI is genuinely changing the SDLC, where it creates lift, where it creates noise, and where it creates entirely new attack surface
    Architecting real defenses at the prompt layer, across MCP servers, and inside RAG pipelines, not just bolting content filters onto the front door
    Why developers are the new attack surface, and why a lot of what gets labeled as “supply chain attacks” lately is really a developer compromise that cascaded into the supply chain
    Tanya’s threat model, defense framework, and maturity model for protecting developers themselves
    DevSec Station, Tanya’s new podcast delivering 5–10 minute secure coding lessons in a format built for how developers actually consume content
    What she’d change tomorrow about how AppSec programs are built and run if she could change just one thing
    This is one of those conversations that ranges from the practical (what to do Monday morning) to the philosophical (what does it even mean to “secure software” when an AI can find more zero-days in a weekend than a Red Team finds in a year). Tanya brings the rare combination of deep technical chops, real teaching ability, and genuine warmth that makes a hard subject feel approachable.
    If you lead an AppSec program, write code for a living, run a security team trying to keep up with AI-assisted development, or you’re just trying to figure out where this whole industry is heading, this is the episode for you.
    Resources from the episode:
    SecureMyVibe
    DevSec Station Podcast (Tanya’s new show)
    She Hacks Purple Consulting
    Alice and Bob Learn Application Security and Alice and Bob Learn Secure Coding
    OWASP Top 10 2025 — https://owasp.org/Top10/2025/
    Claude Mythos Preview System Card — Anthropic
    Thanks for being here. If this episode landed for you, the best thing you can do is share it with one person on your team who’d find it useful, that’s how this newsletter and show grow.
  • Resilient Cyber

    AI and the Future of Secure Coding

    16/04/2026 | 23 mins.
    What happens to application security when AI agents start writing most of the code?
    Jack Cable knows both sides of this problem better than almost anyone. As a Senior Technical Advisor at CISA, he helped architect the Secure by Design initiative that challenged the entire software industry to stop shipping insecure products and expecting customers to clean up the mess. Now, as the founder of Corridor, he's building at the center of a question that didn't exist two years ago: how do you govern, secure, and trust code that no human wrote?
    In this episode, Jack walks us through the journey from federal cybersecurity policy to startup founder, and why he believes we're at an inflection point that makes everything before it look manageable. We talk about why a decade of shift-left never actually fixed the vulnerability backlog, and why the rise of coding agents, Cursor, Claude Code, Codex, and the internal tools enterprises are quietly building, is about to make that backlog look quaint.
    Jack makes the case for a new category he's helping define called Agentic Security Coding Management, and explains what separates it from the SAST tools and ASPM platforms security teams already have. We get into the uncomfortable duality of AI as both the source of the problem and the proposed solution, the frontier labs showing up in AppSec with unclear intentions, and the market confusion that's leaving CISOs struggling to tell real governance from repackaged scanning.
    We spend the back half of the conversation on the hard questions. What does real governance of AI-generated code actually look like when thousands of developers are running agents in parallel? Is it policy enforcement at the agent level, provenance tracking, runtime attestation, or something nobody has built yet? And drawing on his time at CISA, Jack shares where he sees regulation heading: liability frameworks, mandatory disclosure, and what happens if we get the policy either too heavy or too absent at the exact wrong moment.
    Whether you're a CISO trying to get ahead of this, a founder building in the space, or a developer watching your workflow transform in real time, this is the conversation that frames where AppSec goes from here.
  • Resilient Cyber

    Your AI Agent Is Running As Root

    08/04/2026 | 44 mins.
    When you fire up Claude Code, Cursor, or any AI coding agent, it launches with your full system permissions, your SSH keys, cloud credentials, browser passwords, every file on your machine. Most developers never think twice about it.
    Luke Hinds did. And then he built something about it.
    Luke is the creator of Sigstore, the cryptographic signing infrastructure now used by PyPI, Homebrew, GitHub, and Google as the industry standard for software supply chain security. In this episode, he joins Chris to talk about why he's watching the industry make the exact same mistake it made a decade ago, and what he built to try to stop it.
    We cover the full picture: why application-layer guardrails and system prompts fundamentally fail as security boundaries for AI agents (and what kernel-level enforcement actually means), the .md file as an emerging control plane attack surface, the OpenClaw wake-up call and what the skills marketplace ecosystem gets structurally wrong about trust and provenance, the approval fatigue problem and Anthropic's 17% false negative rate on Claude Code's auto-mode classifier, extending SLSA and Sigstore attestation frameworks to AI-generated code, and why LLM-as-a-judge may not be the silver bullet many are hoping for.
    Luke also makes a broader argument about where this is all heading — volumes of AI-generated code growing faster than human capacity to review it, junior engineers being priced out of the industry, and an aging cohort of engineers who can actually read and reason about code at depth. It's a candid, technically grounded conversation from someone who's been in open source security for 20+ years and has seen this movie before.
    nono is at nono.sh, one line to install, one line to run. No excuse not to

More Technology podcasts

About Resilient Cyber

Resilient Cyber brings listeners discussions from a variety of Cybersecurity and Information Technology (IT) Subject Matter Experts (SME) across the Public and Private domains from a variety of industries. As we watch the increased digitalization of our society, striving for a secure and resilient ecosystem is paramount.
Podcast website

Listen to Resilient Cyber, Search Engine and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Stations and podcasts to bookmark
  • Stream via Wi-Fi or Bluetooth
  • Supports Carplay & Android Auto
  • Many other app features

Resilient Cyber: Podcasts in Family