
Modern Cyber with Jeremy Snyder


104 episodes

  • This Week in AI Security - 9th April 2026

    09/04/2026 | 11 mins.
    In this episode for April 9, 2026, Jeremy covers a week dominated by highly sophisticated supply chain attacks and the emergence of "Project Glasswing", an internal Anthropic project revealing that next-gen AI models may be "too good" at finding zero-day vulnerabilities.
    Key Stories & Developments:
    The FBI's IC3 Report: For the first time in the report's 25-year history, the FBI broke out AI-enabled fraud as its own category; it accounted for $893 million in losses across BEC, romance, and investment scams.
    Ollama Exposure Spikes: A Shodan scan reveals that publicly exposed Ollama instances have jumped from 1,100 in September 2025 to over 25,000 in April 2026.
    Critical CVEs in AI Tooling: Both MLflow and PraisonAI received maximum CVSS scores of 10.0 for flaws allowing unauthenticated code execution and command injection.
    The Axios Supply Chain Heist: In a sophisticated "long con," threat actors (Team PCP) spent weeks building rapport with the Axios project maintainer via a fake Slack workspace. They eventually lured the maintainer into downloading malware, allowing them to inject a Remote Access Trojan (RAT) into a package installed 600,000 times.
    Project Glasswing (Claude Mythos): Leaked documents from Anthropic describe Claude Mythos, a model family with alarming offensive-security capabilities. Mythos discovered a 27-year-old bug predating GitHub, and 99% of the zero-days it has identified remain unpatched, prompting internal debate over a controlled rollout.
    Vertex AI Permission Flaw: Unit 42 discovered a flaw in Google Cloud’s Vertex AI that could allow AI agents to bypass security boundaries and access sensitive data.
    Episode Links
    https://securityboulevard.com/2026/04/cyber-fraud-cost-americans-17-billion-in-2025-ai-scams-make-list-fbi/
    https://insecurestack.substack.com/p/eus-exposed-ai-infrastructure
    https://securityonline.info/weekly-vulnerability-digest-april-2026-chrome-zero-day-ai-security/
    https://thehackernews.com/2026/03/vertex-ai-vulnerability-exposes-google.html
    https://fortune.com/2026/04/02/mercor-ai-startup-security-incident-10-billion/
    https://www.sans.org/blog/what-we-learned-axios-npm-supply-chain-compromise-emergency-briefing
    https://techcrunch.com/2026/04/06/north-koreas-hijack-of-one-of-the-webs-most-used-open-source-projects-was-likely-weeks-in-the-making/
    https://thehackernews.com/2026/04/flowise-ai-agent-builder-under-active.html
    https://www.securityweek.com/anthropic-unveils-claude-mythos-a-cybersecurity-breakthrough-that-could-also-supercharge-attacks/
    https://www.staffingindustry.com/news/global-daily-news/mercor-reports-data-breach
    https://red.anthropic.com/2026/mythos-preview/

    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
  • Joseph Carson of Segura

    09/04/2026 | 35 mins.
    In this annual recap from the sidelines of RSAC 2026, Jeremy is joined by Joseph Carson, Chief Security Evangelist at Segura. They discuss a conference floor that felt more like an AI event than a cybersecurity one, exploring the convergence of agentic AI and identity security. Joseph shares critical insights from the Estonia "Digital Nation" playbook, the growing risk of non-human identities, and why organizations must move from "hope as a strategy" to a proactive resiliency model that assumes physical and digital disruption.
    Key Episode Highlights:
    The AI Convergence: Joseph and Jeremy observe that AI has become "fuel to the fire" for cybersecurity. While AI helps defenders move at the pace of attackers, it succeeds only with rigorous guardrails such as least privilege and security by design.
    Identity of the Machine: A major theme of the conference was non-human identities. Joseph argues that AI agents should never use human credentials but should instead rely on ephemeral, just-in-time (JIT) keys to maintain accountability and limit the blast radius.
    Estonia’s Resiliency Playbook: Joseph details how Estonia transitioned from a target of cyber war to a resilient digital nation. He highlights the use of "Data Embassies"—storing sovereign data in geographically distributed, diplomatically protected locations—to ensure the country can "reboot" even after a total local failure.
    Beyond Cybersecurity to Physical Impacts: The discussion shifts to how attackers are reverting to "cheap" physical disruptions like GPS jamming and cutting undersea data cables when digital defenses become too strong.
    The "Luck" Trap: Referencing the famous Maersk ransomware recovery, Joseph warns that finding a single surviving backup by chance is not a strategy. Organizations must simulate worst-case scenarios, including the loss of their identity provider (IdP) or primary cloud vendor.
    About Joseph
    Joseph Carson is Chief Security Evangelist and Advisory CISO at Segura, where he helps organizations worldwide strengthen identity security and build resilient cyber defense strategies. An award-winning cybersecurity leader with more than three decades of experience, Joe has advised governments, critical infrastructure, and global enterprises. He is the author of Cybersecurity for Dummies, read by over 50,000 professionals, and a regular contributor to leading outlets including The Wall Street Journal and Dark Reading. Joe also hosts the podcast Security by Default and is a frequent keynote speaker on identity and AI-driven threats.
    Episode Links
    Security by Default Podcast: https://open.spotify.com/show/0mzN5M5CkFVLn8fq5TnH0O
    Joseph on LinkedIn: https://www.linkedin.com/in/josephcarson/
    Segura Website: https://segura.security/
  • This Week in AI Security - 2nd April 2026

    02/04/2026 | 12 mins.
    In this episode of This Week in AI Security for April 2, 2026, Jeremy discusses a "perfect storm" for offensive cyber operations. As AI begins to discover vulnerabilities in legacy software faster than humans can patch them, regulators are sounding the alarm on the "intolerable risks" of AI-generated code.

    Key Stories & Developments:
    The AI-Generated Vulnerability Surge: Georgia Tech’s Vibe Security Radar tracked 35 CVEs in March 2026 alone that were directly attributable to AI-generated code, a sharp increase from just 6 in January.
    NCSC Warning: Richard Horne, head of the UK’s National Cyber Security Centre, warned at RSAC that "vibe coding" currently presents "intolerable risks" for most organizations as software volume is on track to double every 42 months.
    Langflow RCE Exploited: CISA has added a critical unauthenticated remote code execution (RCE) flaw in Langflow to its Known Exploited Vulnerabilities catalog.
    "MAD" Bugs in Legacy Tools: The "Month of AI Discovered Bugs" initiative utilized LLMs to find critical zero-day RCE vulnerabilities in decades-old tools like Vim and GNU Emacs.
    The Claude Mythos Leak: Anthropic confirmed a major leak of unpublished assets related to its next-generation model, Claude Mythos, following a content management system misconfiguration.
    Offensive AI Multiplier: Hacker crew Team PCP claimed in Forbes that they are using AI-powered automated agents to turbocharge attacks on developer tools and repository infrastructures.

    Episode Links
    https://www.forbes.com/sites/ronschmelzer/2026/03/27/major-security-breach-of-critical-ai-dependency-exposes-cloud-secrets/
    https://threatprotect.qualys.com/2026/03/26/cisa-added-langflow-vulnerability-to-its-known-exploited-vulnerabilities-catalog-cve-2026-33017/
    https://siliconangle.com/2026/03/30/openai-codex-vulnerability-enabled-github-token-theft-via-command-injection-report-finds/
    https://www.infosecurity-magazine.com/news/ai-generated-code-vulnerabilities/
    https://www.itpro.com/security/ncsc-warns-vibe-coding-poses-a-major-risk
    https://www.forbes.com/sites/thomasbrewster/2026/03/26/hackers-launch-devastating-attacks-on-ai-devs/
    https://markaicode.com/prompt-injection-attacks-ai-security-2026/
    https://cyberscoop.com/ai-cyberattacks-two-years-insane-vulnerabilities-kevin-mandia-alex-stamos-morgan-adamski-rsac-2026/
    https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities/
    https://cyberwebspider.com/cyber-security-news/ai-critical-rce-flaws-vim-emacs/

  • This Week in AI Security - 26th March 2026

    26/03/2026 | 14 mins.
    In the latest episode of This Week in AI Security, Jeremy reports live from the sidelines of RSAC in San Francisco. The week is defined by "gullible" AI agents, legal precedents for chatbot liability, and a massive supply chain attack targeting the tools developers use to build AI applications.
    Key Stories & Developments:
    The "Minion" Problem: Zenity researchers demonstrated zero-click exploits against Cursor, Salesforce Einstein, ChatGPT, and Copilot, arguing that prompt injection should be reframed as "persuasion" vectors that turn agents into malicious minions.
    The $10M Discount Fabrication: A red teaming analysis of over 50 customer-facing AI agents found that "persuading" chatbots could lead to the fabrication of $10 million in unauthorized service discounts and commitments.
    Legal Precedent, Air Canada Liable: The British Columbia Civil Resolution Tribunal ruled that Air Canada is legally liable for the incorrect advice given by its chatbot, setting a major precedent for corporate AI accountability.
    Meta’s Internal "Sev 1" Fail: A Meta engineer’s internal AI agent posted incorrect advice to a forum without human approval, inadvertently exposing a large amount of company data.
    LLM Fingerprinting: New academic research shows that attackers can now fingerprint which specific LLM is in use by observing traffic patterns, allowing them to target the specific vulnerabilities (like the "Grandma" exploit) unique to that model.
    The LiteLLM Supply Chain Attack: In the biggest story of the week, the threat actor group Team PCP compromised Trivy and used it to harvest the credentials needed to poison LiteLLM on PyPI. Malicious versions of the package, which is normally downloaded millions of times daily, were live for three hours, delivering a Kubernetes worm and a credential harvester.

    Episode Links
    https://www.theregister.com/2026/03/23/pwning_everyones_ai_agents/
    https://cybercory.com/2026/03/19/claudy-day-exposes-hidden-risks-prompt-injection-flaw-in-claude-ai-enables-silent-data-exfiltration/
    https://www.generalanalysis.com/blog/adversarial_analysis_customer_service_agents
    https://www.cve.org/CVERecord?id=CVE-2026-33068
    https://medium.com/@cbchhaya/making-prompt-injection-harder-against-ai-coding-agents-f4719c083a5c
    https://aiautomationglobal.com/blog/ransomware-ai-agents-enterprise-cybersecurity-2026
    https://techcrunch.com/2026/03/18/meta-is-having-trouble-with-rogue-ai-agents/
    https://arxiv.org/html/2510.07176v1
    https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
    https://securityboulevard.com/2026/03/colorado-moves-to-revise-its-landmark-ai-law-after-industry-pushback/
    https://securitylabs.datadoghq.com/articles/litellm-compromised-pypi-teampcp-supply-chain-campaign/
  • Ann Dunkin of Georgia Tech

    24/03/2026 | 37 mins.
    In this episode of Modern Cyber, Jeremy sits down with Ann Dunkin, former CIO of the U.S. Department of Energy, to discuss the critical infrastructure that powers our digital lives. As data centers and AI drive unprecedented demand on the energy grid, Ann explains why "aging infrastructure" isn't always the biggest cyber risk, how the U.S. grid is actually structured (including the isolation of Texas), and why security leaders must move from "check-the-box" compliance to active risk management.
    Key Episode Highlights:
    The AI Power Surge: For decades, grid demand was flat; now, AI and data centers are driving a massive growth in load that the aging infrastructure was never designed to handle.
    The "Air Gap" Myth: While older nuclear plants are safely analog, modern grid vulnerabilities live in the "two-way" traffic of IoT devices and smart meters that were never meant to be internet-connected.
    Nation-State Threats: The primary concern for grid security is a nation-state actor gaining a foothold to cause long-term, physically destructive disruptions as a prelude to kinetic war.
    Compliance vs. Risk: Ann shares her experience in the Biden-Harris administration, emphasizing that "table stakes" compliance isn't enough—leaders must use risk registers and tabletop exercises to educate boards on true threats.
    About Ann
    Ann Dunkin is an External Fellow and Distinguished Professor of the Practice at the Georgia Institute of Technology. She is also the CEO of Dunkin Global Advisors, providing strategic business advice to companies of all sizes as well as fractional CIO services. She serves as an independent director on the governing board of Global Interconnection group and the advisory boards for Bowtie Security, Openpolicy and CGAI.
    Episode Links
    Ann Dunkin at Georgia Tech: https://research.gatech.edu/people/ann-dunkin
    Ann Dunkin on LinkedIn: https://www.linkedin.com/in/anndunkin/


About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round-up of interesting stories from across the AI threat landscape.
