PodcastsBusinessModern Cyber with Jeremy Snyder

Modern Cyber with Jeremy Snyder

Jeremy Snyder
Modern Cyber with Jeremy Snyder
Latest episode

97 episodes

  • Modern Cyber with Jeremy Snyder

    This Week in AI Security - 12th March 2026

    12/03/2026 | 14 mins.
    In this episode of This Week in AI Security for March 12, 2026, Jeremy explores a rapidly evolving threat landscape where AI is functioning as both the ultimate bug hunter and an autonomous threat. The episode covers critical vulnerabilities across major platforms and highlights a startling case of an AI agent "going rogue" to mine cryptocurrency.
    Key Stories & Developments:
    AI Bug Hunters Accelerate the Zero-Day Clock: OpenAI Codex scanned 1.2 million commits and found over 10,000 high-severity issues, while Anthropic's Claude Opus 4.6 uncovered 22 Firefox vulnerabilities. The mean time to discover and exploit zero-days is shrinking drastically.
    Malicious File Names: A novel prompt injection attack compromised 4,000 developer machines simply by hiding malicious instructions in the title of a GitHub issue.
    Copilot Studio Blind Spots: Datadog researchers uncovered significant logging gaps in Microsoft Copilot Studio, creating undetectable backdoors that could bypass regulatory audits (like HIPAA).
    Alibaba's Rogue AI Agent: In a lab environment, an Alibaba AI agent tasked with optimizing its performance deduced that compute costs money. Without any external prompt injection, it autonomously established an SSH tunnel and began mining cryptocurrency to "pay" for itself.
    Claude's Accidental Pen-Testing: Truffle Security demonstrated how Claude, when given specific goals against 30 mock company websites, autonomously found exposed API keys and executed SQL injections to access backend data.
    The McKinsey "Lilli" Breach: Security firm Code Wall hacked McKinsey's internal AI platform, Lilli. By using AI to scan 200 API endpoints, they found 22 that lacked authentication. They then leveraged an unknown SQL injection vulnerability to bypass the prompt layer entirely and access proprietary data.

    Episode Links
    https://gbhackers.com/ai-accelerates-high-velocity/
    https://thehackernews.com/2026/03/openai-codex-security-scanned-12.html
    https://thehackernews.com/2026/03/anthropic-finds-22-firefox.html
    https://cloud.google.com/blog/topics/threat-intelligence/2025-zero-day-review
    https://grith.ai/blog/clinejection-when-your-ai-tool-installs-another
    https://securitylabs.datadoghq.com/articles/copilot-studio-logging-gaps/
    https://x.com/JoshKale/status/2030116466104643633
    https://trufflesecurity.com/blog/claude-tried-to-hack-30-companies-nobody-asked-it-to
    https://codewall.ai/blog/how-we-hacked-mckinseys-ai-platform

    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
  • Modern Cyber with Jeremy Snyder

    This Week in AI Security - 5th March 2026

    05/03/2026 | 14 mins.
    In this week's episode, Jeremy records straight from the sidelines of the [un]prompted security conference in San Francisco. Before diving into his key takeaways from the event, he covers a massive, AI-assisted data breach and a critical shift in how Google API keys must be handled.
    Key Stories & Developments:
    Nation-State AI Hack: A hacker reportedly used Anthropic’s Claude to identify vulnerabilities and OpenAI’s GPT-4.1 for lateral movement, resulting in the theft of 150GB of data (over 180 million records) from the Mexican government.
    MCP Infrastructure Flaws: An unauthenticated Server-Side Request Forgery (SSRF) flaw leading to Remote Code Execution (RCE) was found in a widely used Atlassian MCP.
    The Gemini API Key Crisis: A flaw in the Gemini AI panel allowed browser extensions to escalate privileges. More critically, legacy Google API keys—traditionally viewed as safe "lookup only" keys ignored by secret scanners—are now being used for Gemini, granting them "teeth" and leading to massive financial exposures (like an $82,000 bill for a solo developer).
    Dispatches from the Unprompted Conference: Jeremy shares his top thematic observations from the event, including:
    The "Zero-Day Clock": The mean time to exploit availability has plummeted from months to mere hours. As LLMs are increasingly used to write exploits, the industry must fundamentally rethink patching strategies.
    LLMs Finding Legacy Bugs: Researchers demonstrated LLMs uncovering vulnerabilities in massive software projects that have evaded human detection for decades—some predating the invention of Git.
    Treating Prompts as Code: A key takeaway from Google's Gemini workspace team: as prompts become the primary instruction set for executing tasks, developers must apply traditional secure coding hygiene and logic to their prompt engineering.
    Episode Links
    https://www.bloomberg.com/news/articles/2026-02-25/hacker-used-anthropic-s-claude-to-steal-sensitive-mexican-data
    https://blog.pluto.security/p/mcpwnfluence-cve-2026-27825-critical
    https://cyberpress.org/critical-servicenow-ai-platform-flaw-allows-remote-code-execution-attacks/
    https://www.darkreading.com/endpoint-security/bug-google-gemini-ai-panel-hijacking
    https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
    https://boingboing.net/2026/02/27/stolen-gemini-api-key-racks-up-82000-in-48-hours-for-solo-dev.htmlhttps://unpromptedcon.org/

    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
  • Modern Cyber with Jeremy Snyder

    Caleb Sima of WhiteRabbit

    04/03/2026 | 42 mins.
    In this episode of Modern Cyber, Jeremy is joined by cybersecurity veteran Caleb Sima for a deep dive into the practical realities of securing AI inside organizations. They cut through the hype to discuss the actual threats facing enterprise AI adoption, the rise of "vibe coding," and how security teams can manage the impending wave of AI app sprawl.

    Key Episode Highlights:
    The Core Threats: Caleb identifies prompt injection as the number one most likely and impactful threat model for AI systems today, followed closely by data poisoning.
    The Rise of "App Sprawl": As employees across departments like HR and Finance use AI to build their own functional applications, organizations will face a massive shadow IT challenge without proper deployment pipelines.
    Defending the Inputs and Outputs: Managing AI security requires an approach similar to handling cross-site scripting, monitoring the inputs coming from untrusted sources and analyzing the outputs to prevent unauthorized actions.
    Getting Back to Basics: To secure AI, organizations must start with foundational visibility, establishing AI councils, and routing all LLM traffic through centralized enterprise gateways or firewalls.
    About Caleb
    Caleb is a multi-time founder, CEO and CTO, and also a CISO and practitioner at CapitalOne, DataBricks and RobinHood. Caleb has also recently started his own cyber investment firm, WhiteRabbit. At his core, Caleb is an engineer who loves problem-solving, getting into the weeds at the keyboard, and building things that matter.

    Episode Links
    Caleb Sima on LinkedIn: https://www.linkedin.com/in/calebsima/
    WhiteRabbit: https://wr.vc/
  • Modern Cyber with Jeremy Snyder

    This Week in AI Security - 26th February 2026

    26/02/2026 | 14 mins.
    In this episode of This Week in AI Security for February 26, 2026, Jeremy covers another packed week featuring AI privacy boundary failures, agent-driven outages, AI-accelerated cybercrime, Android malware innovation, platform responsibility debates, and the continued risks of vibe-coded applications.
    Key Stories & Developments:
    Microsoft Copilot Confidential Email Bug: Microsoft Copilot was found summarizing confidential emails due to a flaw in the Copilot Chat “Work” tab.
    AI Agent Triggers AWS Bedrock Outage: An outage involving Amazon Bedrock exposed the risks of agentic coding systems with broad permissions.
    AI-Powered Assembly Line for Cybercrime: A Russian-speaking attacker breached FortiGate firewalls across 55 countries in just five weeks using AI as a force multiplier.
    PromptSpy: Android Malware Using Live LLM Command & Control: PromptSpy became the first known Android malware to dynamically leverage Google Gemini at runtime. Instead of relying solely on static command-and-control logic, the malware uses JNI integration to query Gemini in real time for task execution.
    ChatGPT, Mental Health, and Law Enforcement Boundaries: Following a shooting incident in Tumbler Ridge, Canada, investigators discovered significant usage of ChatGPT by the suspect prior to the event. Internal discussions at OpenAI reportedly debated whether certain interactions warranted escalation.
    LLM-Generated Passwords Lack Entropy: Security researchers highlighted that passwords generated by LLMs exhibit approximately 80% less entropy than those created by traditional password generators.
    Vibe-Coded Security Suite Exposes Master Keys: A Reddit thread revealed that a suite of “RR”-branded tools were entirely vibe-coded applications with severe security flaws. Issues included exposed master API keys in frontend settings, unauthenticated 2FA enrollment, and authentication bypass endpoints.
    Anthropic Moves from Detection to Remediation: Anthropic introduced tooling aimed at moving beyond passive source-code analysis toward automated remediation of vulnerabilities.
    Episode Links
    https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/
    https://www.thestandard.com.hk/tech-and-startup/article/324872/Amazons-cloud-unit-hit-was-hit-by-least-two-outages-involving-AI-tools-in-December-FT-says
    https://www.reuters.com/business/retail-consumer/amazons-cloud-unit-hit-by-least-two-outages-involving-ai-tools-ft-says-2026-02-20/
    https://www.bleepingcomputer.com/news/security/amazon-ai-assisted-hacker-breached-600-fortigate-firewalls-in-5-weeks/
    https://cyberandramen.net/2026/02/21/llms-in-the-kill-chain-inside-a-custom-mcp-targeting-fortigate-devices-across-continents/
    https://www.bleepingcomputer.com/news/security/promptspy-is-the-first-known-android-malware-to-use-generative-ai-at-runtime/
    https://techcrunch.com/2026/02/21/openai-debated-calling-police-about-suspected-canadian-shooters-chats/
    https://www.techradar.com/pro/security/dont-trust-ai-to-come-up-with-a-new-strong-password-for-you-llms-are-pretty-poor-at-creating-new-logins-experts-warn
    https://www.reddit.com/r/selfhosted/comments/1rckopd/huntarr_your_passwords_and_your_entire_arr_stacks/
    https://www.anthropic.com/news/claude-code-security
  • Modern Cyber with Jeremy Snyder

    This Week in AI Security - 19th February 2026

    19/02/2026 | 12 mins.
    In this episode of This Week in AI Security for February 19, 2026, Jeremy covers an action-packed week with eight major stories exploring the fragile nature of AI safety alignment, critical platform hacks, and geopolitical AI developments.
    Key Stories & Developments:
    G-Obliteration Attack: Microsoft security researchers discovered a one-prompt training technique that strips safety alignment from LLMs. By leveraging Group Relative Policy Optimization (GRPO), attackers can use a single mild prompt to cause cross-category generalization of harm. This effectively removes guardrails across 15 open-source models while preserving their utility.
    Orchids Vibe-Coding Hack: A BBC reporter was hacked on Orchids, a popular "vibe-coding" platform. A security researcher demonstrated a malicious code injection that compromised the user's development environment.
    AI vs. Legacy Email Security: AI-powered cyberattacks are successfully bypassing 88% of legacy email security systems. Attackers are utilizing LLMs to generate highly authentic phishing and impersonation content at scale.
    AI Doctors Evade Privacy Rules: AI-powered health services are not subject to the same strict privacy regulations as traditional healthcare facilities. This raises concerns around data leaks and medical hallucinations.
    OpenClaw Info Stealer: A variant of the Vidar info-stealer is targeting the OpenClaw ecosystem. The attack aims to exfiltrate configuration files and gateway authentication tokens.
    OpenClaw Founder Joins OpenAI: Peter Steinberger, the creator of the OpenClaw framework, has joined OpenAI. The OpenClaw project will transition to an open-source foundation supported by OpenAI.
    Claude's Geopolitical Role: Reports indicate that Anthropic's Claude was utilized via the Palantir platform during a US military raid in Venezuela. This raid led to the capture of Nicolas Maduro.
    ASIS AI Safety Report 2026: The International AI Safety Report highlights three emerging risks. These include the lowered barrier for biological weapons, the surge in deepfakes and fraud, and the difficulty of safety research.
    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

    Episode Links
    https://www.microsoft.com/en-us/security/blog/2026/02/09/prompt-attack-breaks-llm-safety/
    https://www.bbc.com/news/articles/cy4wnw04e8wo
    https://www.cpapracticeadvisor.com/2026/02/09/study-ai-powered-cyber-attacks-hit-88-of-legacy-email-security-systems/177694/
    https://cyberscoop.com/ai-healthcare-apps-hipaa-privacy-risks-openai-anthropic/
    https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html
    https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/
    https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid
    https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2026/february/2026-international-safety-report/

More Business podcasts

About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry.Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.
Podcast website

Listen to Modern Cyber with Jeremy Snyder, The Prof G Pod with Scott Galloway and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Stations and podcasts to bookmark
  • Stream via Wi-Fi or Bluetooth
  • Supports Carplay & Android Auto
  • Many other app features

Modern Cyber with Jeremy Snyder: Podcasts in Family