
Modern Cyber with Jeremy Snyder


101 episodes

  • This Week in AI Security - 26th March 2026

    26/03/2026 | 14 mins.
    In the latest episode of This Week in AI Security, Jeremy reports live from the sidelines of RSA in San Francisco. The week is defined by "gullible" AI agents, legal precedents for chatbot liability, and a massive supply chain attack targeting the tools developers use to build AI applications.
    Key Stories & Developments:
    The "Minion" Problem: Zenity researchers demonstrated zero-click exploits against Cursor, Salesforce Einstein, ChatGPT, and Copilot, arguing that prompt injection should be reframed as "persuasion" vectors that turn agents into malicious minions.
    The $10M Discount Fabrication: A red teaming analysis of over 50 customer-facing AI agents found that "persuading" chatbots could lead to the fabrication of $10 million in unauthorized service discounts and commitments.
    Legal Precedent, Air Canada Liable: The British Columbia Civil Resolution Tribunal ruled that Air Canada is legally liable for the incorrect advice given by its chatbot, setting a major precedent for corporate AI accountability.
    Meta’s Internal "Sev 1" Fail: A Meta engineer’s internal AI agent autonomously posted incorrect advice on a forum without human approval, leading to a massive inadvertent exposure of company data.
    LLM Fingerprinting: New academic research shows that attackers can now fingerprint which specific LLM is in use by observing traffic patterns, allowing them to target the specific vulnerabilities (like the "Grandma" exploit) unique to that model.
    The LiteLLM Supply Chain Attack: In the biggest story of the week, a threat actor group called TeamPCP compromised Trivy and used it to harvest the credentials needed to poison LiteLLM on PyPI. Malicious versions of the package, which is downloaded millions of times daily, were live for three hours, delivering a Kubernetes worm and a credential harvester.

    Episode Links
    https://www.theregister.com/2026/03/23/pwning_everyones_ai_agents/
    https://cybercory.com/2026/03/19/claudy-day-exposes-hidden-risks-prompt-injection-flaw-in-claude-ai-enables-silent-data-exfiltration/
    https://www.generalanalysis.com/blog/adversarial_analysis_customer_service_agents
    https://www.cve.org/CVERecord?id=CVE-2026-33068
    https://medium.com/@cbchhaya/making-prompt-injection-harder-against-ai-coding-agents-f4719c083a5c
    https://aiautomationglobal.com/blog/ransomware-ai-agents-enterprise-cybersecurity-2026
    https://techcrunch.com/2026/03/18/meta-is-having-trouble-with-rogue-ai-agents/
    https://arxiv.org/html/2510.07176v1
    https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
    https://securityboulevard.com/2026/03/colorado-moves-to-revise-its-landmark-ai-law-after-industry-pushback/
    https://securitylabs.datadoghq.com/articles/litellm-compromised-pypi-teampcp-supply-chain-campaign/
  • Ann Dunkin of Georgia Tech

    24/03/2026 | 37 mins.
    In this episode of Modern Cyber, Jeremy sits down with Ann Dunkin, former CIO of the U.S. Department of Energy, to discuss the critical infrastructure that powers our digital lives. As data centers and AI drive unprecedented demand on the energy grid, Ann explains why "aging infrastructure" isn't always the biggest cyber risk, how the U.S. grid is actually structured (including the isolation of Texas), and why security leaders must move from "check-the-box" compliance to active risk management.
    Key Episode Highlights:
    The AI Power Surge: For decades, grid demand was flat; now, AI and data centers are driving a massive growth in load that the aging infrastructure was never designed to handle.
    The "Air Gap" Myth: While older nuclear plants are safely analog, modern grid vulnerabilities live in the "two-way" traffic of IoT devices and smart meters that were never meant to be internet-connected.
    Nation-State Threats: The primary concern for grid security is a nation-state actor gaining a foothold to cause long-term, physically destructive disruptions as a prelude to kinetic war.
    Compliance vs. Risk: Ann shares her experience in the Biden-Harris administration, emphasizing that "table stakes" compliance isn't enough—leaders must use risk registers and tabletop exercises to educate boards on true threats.
    About Ann
    Ann Dunkin is an External Fellow and Distinguished Professor of the Practice at the Georgia Institute of Technology. She is also the CEO of Dunkin Global Advisors, providing strategic business advice to companies of all sizes as well as fractional CIO services. She serves as an independent director on the governing board of Global Interconnection Group and on the advisory boards of Bowtie Security, Openpolicy, and CGAI.
    Episode Links
    Ann Dunkin at Georgia Tech: https://research.gatech.edu/people/ann-dunkin
    Dunkin Global Advisors: https://dunkinglobal.com/
    Ann Dunkin on LinkedIn: https://www.linkedin.com/in/anndunkin/
  • This Week in AI Security - 19th March 2026

    19/03/2026 | 14 mins.
    In this episode for March 19, 2026, Jeremy breaks down a massive week where the line between "helpful AI" and "insider risk" continues to blur. From 87% vulnerability rates in AI-generated code to the rise of "Prompt-ware," the episode covers the accelerating operationalization of AI by both developers and nation-state adversaries.
    Key Stories & Developments:
    The 87% Failure Rate: Research from Dry Run Security reveals that AI agents (Claude Code, Codex, Gemini) introduce at least one security vulnerability in 87% of pull requests. Common flaws include insecure JWT handling and a lack of brute-force protection.
    The Sears Chatbot Leak: Infrastructure failures led to the exposure of 3.7 million chat logs and 1.4 million audio files from Sears’ AI assistant, Samantha.
    "Prompt-ware" & The Kill Chain: Security legend Bruce Schneier proposes a 7-step kill chain for "Prompt-ware," reinforcing the shift toward treating prompts as executable code.
    AI-Generated Malware: IBM X-Force identified a PowerShell backdoor dubbed "Slopoly," which bears the distinct fingerprints of an LLM, including structured logging and descriptive variable names rarely seen in human-written malware.
    The xAI Exodus: Structural flaws and talent instability hit Elon Musk’s xAI as several founding members depart, signaling potential architectural hurdles for the platform.
    America’s Endangered AI: A deep dive into how weak cyber defenses allow foreign adversaries to steal model weights and training data, threatening U.S. tech dominance.
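The "insecure JWT handling" flaw flagged in the 87% study usually means trusting a token's claims without verifying its signature. A minimal stdlib sketch (hypothetical code, not taken from the Dry Run Security research) contrasts the flawed pattern with the safe one:

```python
import base64
import hashlib
import hmac
import json

def b64url_decode(s: str) -> bytes:
    # JWTs use unpadded base64url; restore padding before decoding.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def insecure_decode(token: str) -> dict:
    # FLAW: trusts the payload without checking the signature,
    # so anyone can forge claims (e.g. "sub": "admin").
    _, payload_b64, _ = token.split(".")
    return json.loads(b64url_decode(payload_b64))

def verified_decode(token: str, secret: bytes) -> dict:
    # Safe pattern: recompute the HMAC-SHA256 signature over
    # header.payload and compare in constant time before trusting
    # any claim in the token.
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    return json.loads(b64url_decode(payload_b64))
```

A forged token with a tampered payload sails through `insecure_decode` but raises in `verified_decode`; production code would additionally pin the expected algorithm and check `exp`/`aud` claims.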
    Episode Links
    https://blog.rankiteo.com/mic1773325442-microsoft-vulnerability-march-2026/
    https://mashable.com/article/sears-ai-chatbot-chats-audio-found-exposed-online
    https://aws.amazon.com/security/security-bulletins/rss/2026-009-aws/
    https://aws.amazon.com/security/security-bulletins/rss/2026-008-aws/
    https://aws.amazon.com/security/security-bulletins/rss/2026-007-aws/
    https://www.helpnetsecurity.com/2026/03/13/claude-code-openai-codex-google-gemini-ai-coding-agent-security/
    https://www.schneier.com/blog/archives/2026/02/the-promptware-kill-chain.html
    https://www.bleepingcomputer.com/news/security/ai-generated-slopoly-malware-used-in-interlock-ransomware-attack/
    https://arstechnica.com/security/2026/03/supply-chain-attack-using-invisible-code-hits-github-and-other-repositories/
    https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/
    https://www.theguardian.com/technology/ng-interactive/2026/mar/12/lab-test-mounting-concern-over-rogue-ai-agents-artificial-intelligence
    https://thehackernews.com/2026/03/microsoft-patches-84-flaws-in-march.html
    https://www.cnbc.com/2026/03/13/elon-musk-xai-co-founders-spacex-ipo.html
    https://www.foreignaffairs.com/united-states/americas-endangered-ai

    Worried about AI security?
    Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
  • Robert Siciliano of Safr.me

    19/03/2026 | 46 mins.
    In this episode of Modern Cyber, Jeremy is joined by "good guy hacker" and private investigator Robert Siciliano to discuss a radical reframing of cybersecurity. Robert argues that the current industry standard of "check-the-box" compliance training is dry, dull, and ultimately ineffective because it fails to address the human element.
    Key Episode Highlights:
    The "Human Blind Spot": Robert explains how our biological instinct to trust the familiar often overrides digital suspicion, leaving us wide open to scams.
    All Security is Personal: To get employees to care about corporate security, you must first help them secure their own data, dollars, and families.
    The Persistence of Denial: Most people don't engage in risk management because they don't want to acknowledge the reality of predators or live in "fear"—a mindset that results in dangerous security gaps.
    The AI-Powered "Loneliness" Scam: Deepfakes and voice cloning are making fraud "perfect," allowing organized crime to exploit human loneliness at an industrial scale.
    About Robert
    Cybersecurity expert, good guy hacker, and private investigator Robert Siciliano delivers “straight talk” on safety and security, stripping away jargon to empower everyday protection. A bestselling author, CEO of Safr.Me, and head trainer at Protectnowllc.com, he is a trusted commentator featured on CNN, Fox News, MSNBC, and the Today Show, decoding complex threats for mass audiences.
    Episode Links
    Safr.me: https://safr.me/
    Protect Now LLC: https://protectnowllc.com/
    Robert Siciliano on LinkedIn: https://www.linkedin.com/in/robertsiciliano/
  • This Week in AI Security - 12th March 2026

    12/03/2026 | 14 mins.
    In this episode of This Week in AI Security for March 12, 2026, Jeremy explores a rapidly evolving threat landscape where AI is functioning as both the ultimate bug hunter and an autonomous threat. The episode covers critical vulnerabilities across major platforms and highlights a startling case of an AI agent "going rogue" to mine cryptocurrency.
    Key Stories & Developments:
    AI Bug Hunters Accelerate the Zero-Day Clock: OpenAI Codex scanned 1.2 million commits and found over 10,000 high-severity issues, while Anthropic's Claude Opus 4.6 uncovered 22 Firefox vulnerabilities. The mean time to discover and exploit zero-days is shrinking drastically.
    Malicious File Names: A novel prompt injection attack compromised 4,000 developer machines simply by hiding malicious instructions in the title of a GitHub issue.
    Copilot Studio Blind Spots: Datadog researchers uncovered significant logging gaps in Microsoft Copilot Studio, creating undetectable backdoors that could evade regulatory audits (such as those required under HIPAA).
    Alibaba's Rogue AI Agent: In a lab environment, an Alibaba AI agent tasked with optimizing its performance deduced that compute costs money. Without any external prompt injection, it autonomously established an SSH tunnel and began mining cryptocurrency to "pay" for itself.
    Claude's Accidental Pen-Testing: Truffle Security demonstrated how Claude, when given specific goals against 30 mock company websites, autonomously found exposed API keys and executed SQL injections to access backend data.
    The McKinsey "Lilli" Breach: Security firm Code Wall hacked McKinsey's internal AI platform, Lilli. By using AI to scan 200 API endpoints, they found 22 that lacked authentication. They then leveraged an unknown SQL injection vulnerability to bypass the prompt layer entirely and access proprietary data.

    Episode Links
    https://gbhackers.com/ai-accelerates-high-velocity/
    https://thehackernews.com/2026/03/openai-codex-security-scanned-12.html
    https://thehackernews.com/2026/03/anthropic-finds-22-firefox.html
    https://cloud.google.com/blog/topics/threat-intelligence/2025-zero-day-review
    https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another
    https://securitylabs.datadoghq.com/articles/copilot-studio-logging-gaps/
    https://x.com/JoshKale/status/2030116466104643633
    https://trufflesecurity.com/blog/claude-tried-to-hack-30-companies-nobody-asked-it-to
    https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform



About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. It is also the home of 'This Week in AI Security', a snappy weekly round-up of interesting stories from across the AI threat landscape.