
Modern Cyber with Jeremy Snyder


96 episodes

  • This Week in AI Security - 5th March 2026

    05/03/2026 | 14 mins.
    In this week's episode, Jeremy records straight from the sidelines of the [un]prompted security conference in San Francisco. Before diving into his key takeaways from the event, he covers a massive, AI-assisted data breach and a critical shift in how Google API keys must be handled.
    Key Stories & Developments:
    Nation-State AI Hack: A hacker reportedly used Anthropic’s Claude to identify vulnerabilities and OpenAI’s GPT-4.1 for lateral movement, resulting in the theft of 150GB of data (over 180 million records) from the Mexican government.
    MCP Infrastructure Flaws: An unauthenticated Server-Side Request Forgery (SSRF) flaw leading to Remote Code Execution (RCE) was found in a widely used Atlassian MCP.
    The Gemini API Key Crisis: A flaw in the Gemini AI panel allowed browser extensions to escalate privileges. More critically, legacy Google API keys—traditionally viewed as safe "lookup only" keys ignored by secret scanners—are now being used for Gemini, granting them "teeth" and leading to massive financial exposures (like an $82,000 bill for a solo developer).
    Dispatches from the Unprompted Conference: Jeremy shares his top thematic observations from the event, including:
    The "Zero-Day Clock": The mean time from vulnerability disclosure to exploit availability has plummeted from months to mere hours. As LLMs are increasingly used to write exploits, the industry must fundamentally rethink patching strategies.
    LLMs Finding Legacy Bugs: Researchers demonstrated LLMs uncovering vulnerabilities in massive software projects that have evaded human detection for decades—some predating the invention of Git.
    Treating Prompts as Code: A key takeaway from Google's Gemini workspace team: as prompts become the primary instruction set for executing tasks, developers must apply traditional secure coding hygiene and logic to their prompt engineering.
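    The Gemini API key story turns on a pattern-matching detail: these keys have a recognizable shape, so scanners could always find them, but many down-ranked them as "lookup only". Below is a minimal sketch of flagging them with Python's standard-library `re` module. The regex reflects the commonly documented key format (an "AIza" prefix followed by 35 URL-safe characters) and is an assumption for illustration, not Google's official specification.

```python
import re

# Google API keys share a well-known shape: "AIza" followed by 35
# URL-safe characters. Scanners matched this pattern but often ranked
# it low-severity because the keys were assumed to be harmless lookup
# credentials. With Gemini billing attached, they deserve the same
# treatment as any other secret.
GOOGLE_API_KEY_RE = re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b")

def find_google_api_keys(text: str) -> list[str]:
    """Return all candidate Google API keys found in a blob of text."""
    return GOOGLE_API_KEY_RE.findall(text)

# Hypothetical leaked config line for demonstration:
sample = 'GEMINI_KEY = "AIza' + "A" * 35 + '"'
print(find_google_api_keys(sample))
```

    In practice a finding like this should be triaged as a live billing credential, not an informational note.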
    Episode Links
    https://www.bloomberg.com/news/articles/2026-02-25/hacker-used-anthropic-s-claude-to-steal-sensitive-mexican-data
    https://blog.pluto.security/p/mcpwnfluence-cve-2026-27825-critical
    https://cyberpress.org/critical-servicenow-ai-platform-flaw-allows-remote-code-execution-attacks/
    https://www.darkreading.com/endpoint-security/bug-google-gemini-ai-panel-hijacking
    https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
    https://boingboing.net/2026/02/27/stolen-gemini-api-key-racks-up-82000-in-48-hours-for-solo-dev.html
    https://unpromptedcon.org/

    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
  • Caleb Sima of WhiteRabbit

    04/03/2026 | 42 mins.
    In this episode of Modern Cyber, Jeremy is joined by cybersecurity veteran Caleb Sima for a deep dive into the practical realities of securing AI inside organizations. They cut through the hype to discuss the actual threats facing enterprise AI adoption, the rise of "vibe coding," and how security teams can manage the impending wave of AI app sprawl.

    Key Episode Highlights:
    The Core Threats: Caleb identifies prompt injection as the number one most likely and impactful threat model for AI systems today, followed closely by data poisoning.
    The Rise of "App Sprawl": As employees across departments like HR and Finance use AI to build their own functional applications, organizations will face a massive shadow IT challenge without proper deployment pipelines.
    Defending the Inputs and Outputs: Managing AI security requires an approach similar to handling cross-site scripting: monitor inputs arriving from untrusted sources and analyze outputs to prevent unauthorized actions.
    Getting Back to Basics: To secure AI, organizations must start with foundational visibility, establishing AI councils, and routing all LLM traffic through centralized enterprise gateways or firewalls.
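    Caleb's cross-site-scripting analogy can be sketched in a few lines: check untrusted input before it reaches the model, and scrub the model's output before it reaches downstream systems. Everything below is illustrative; the filter rules, the `call_llm` stub, and the redaction pattern are placeholders rather than a production policy.

```python
import re

# Input-side rules: refuse prompts that look like injection attempts.
# These patterns are toy examples, not a real detection corpus.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal the system prompt", re.I),
]
# Output-side rule: redact anything shaped like a credential
# (here, the well-known Google API key prefix as an example).
SECRET_PATTERN = re.compile(r"\bAIza[0-9A-Za-z_\-]{35}\b")

def guarded_call(prompt: str, call_llm) -> str:
    # Input filtering, analogous to sanitizing user input against XSS.
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            raise ValueError("prompt blocked by input filter")
    answer = call_llm(prompt)
    # Output filtering, analogous to output encoding: scrub secrets
    # before the response leaves the gateway.
    return SECRET_PATTERN.sub("[REDACTED]", answer)

# Usage with a stubbed model in place of a real LLM client:
fake_llm = lambda p: "Here is a key: AIza" + "B" * 35
print(guarded_call("summarise this doc", fake_llm))
```

    Routing all model traffic through one wrapper like this is also what makes the "centralized enterprise gateway" approach mentioned above enforceable.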
    About Caleb
    Caleb is a multi-time founder, CEO, and CTO, as well as a CISO and practitioner at Capital One, Databricks, and Robinhood. He has also recently started his own cyber investment firm, WhiteRabbit. At his core, Caleb is an engineer who loves problem-solving, getting into the weeds at the keyboard, and building things that matter.

    Episode Links
    Caleb Sima on LinkedIn: https://www.linkedin.com/in/calebsima/
    WhiteRabbit: https://wr.vc/
  • This Week in AI Security - 26th February 2026

    26/02/2026 | 14 mins.
    In this episode of This Week in AI Security for February 26, 2026, Jeremy covers another packed week featuring AI privacy boundary failures, agent-driven outages, AI-accelerated cybercrime, Android malware innovation, platform responsibility debates, and the continued risks of vibe-coded applications.
    Key Stories & Developments:
    Microsoft Copilot Confidential Email Bug: Microsoft Copilot was found summarizing confidential emails due to a flaw in the Copilot Chat “Work” tab.
    AI Agent Triggers AWS Bedrock Outage: An outage involving Amazon Bedrock exposed the risks of agentic coding systems with broad permissions.
    AI-Powered Assembly Line for Cybercrime: A Russian-speaking attacker breached FortiGate firewalls across 55 countries in just five weeks using AI as a force multiplier.
    PromptSpy: Android Malware Using Live LLM Command & Control: PromptSpy became the first known Android malware to dynamically leverage Google Gemini at runtime. Instead of relying solely on static command-and-control logic, the malware uses JNI integration to query Gemini in real time for task execution.
    ChatGPT, Mental Health, and Law Enforcement Boundaries: Following a shooting incident in Tumbler Ridge, Canada, investigators discovered significant usage of ChatGPT by the suspect prior to the event. Internal discussions at OpenAI reportedly debated whether certain interactions warranted escalation.
    LLM-Generated Passwords Lack Entropy: Security researchers highlighted that passwords generated by LLMs exhibit approximately 80% less entropy than those created by traditional password generators.
    Vibe-Coded Security Suite Exposes Master Keys: A Reddit thread revealed that a suite of “RR”-branded tools was entirely vibe-coded, with severe security flaws: exposed master API keys in frontend settings, unauthenticated 2FA enrollment, and authentication-bypass endpoints.
    Anthropic Moves from Detection to Remediation: Anthropic introduced tooling aimed at moving beyond passive source-code analysis toward automated remediation of vulnerabilities.
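    The password-entropy story is easy to put numbers on: a generator that draws each character independently and uniformly from a pool of N symbols yields log2(N) bits per character, which is the bar LLM-composed passwords fail to clear. A standard-library sketch (the 16-character length is an arbitrary example):

```python
import math
import secrets
import string

# 52 letters + 10 digits + 32 punctuation symbols = 94-symbol pool.
POOL = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 16) -> str:
    # secrets.choice draws uniformly with a CSPRNG, unlike an LLM,
    # whose next-character distribution is heavily skewed.
    return "".join(secrets.choice(POOL) for _ in range(length))

def max_entropy_bits(length: int, pool_size: int = len(POOL)) -> float:
    # Upper bound on entropy: independent uniform draws give
    # log2(pool_size) bits per character.
    return length * math.log2(pool_size)

pw = generate_password()
print(pw, round(max_entropy_bits(len(pw)), 1))  # ~104.9 bits for 16 chars
```

    An LLM-generated password of the same length concentrates probability mass on common words and patterns, so its effective entropy is far below this uniform-draw bound.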
    Episode Links
    https://www.bleepingcomputer.com/news/microsoft/microsoft-says-bug-causes-copilot-to-summarize-confidential-emails/
    https://www.thestandard.com.hk/tech-and-startup/article/324872/Amazons-cloud-unit-hit-was-hit-by-least-two-outages-involving-AI-tools-in-December-FT-says
    https://www.reuters.com/business/retail-consumer/amazons-cloud-unit-hit-by-least-two-outages-involving-ai-tools-ft-says-2026-02-20/
    https://www.bleepingcomputer.com/news/security/amazon-ai-assisted-hacker-breached-600-fortigate-firewalls-in-5-weeks/
    https://cyberandramen.net/2026/02/21/llms-in-the-kill-chain-inside-a-custom-mcp-targeting-fortigate-devices-across-continents/
    https://www.bleepingcomputer.com/news/security/promptspy-is-the-first-known-android-malware-to-use-generative-ai-at-runtime/
    https://techcrunch.com/2026/02/21/openai-debated-calling-police-about-suspected-canadian-shooters-chats/
    https://www.techradar.com/pro/security/dont-trust-ai-to-come-up-with-a-new-strong-password-for-you-llms-are-pretty-poor-at-creating-new-logins-experts-warn
    https://www.reddit.com/r/selfhosted/comments/1rckopd/huntarr_your_passwords_and_your_entire_arr_stacks/
    https://www.anthropic.com/news/claude-code-security
  • This Week in AI Security - 19th February 2026

    19/02/2026 | 12 mins.
    In this episode of This Week in AI Security for February 19, 2026, Jeremy covers an action-packed week with eight major stories exploring the fragile nature of AI safety alignment, critical platform hacks, and geopolitical AI developments.
    Key Stories & Developments:
    G-Obliteration Attack: Microsoft security researchers discovered a one-prompt training technique that strips safety alignment from LLMs. By leveraging Group Relative Policy Optimization (GRPO), attackers can use a single mild prompt to cause cross-category generalization of harm. This effectively removes guardrails across 15 open-source models while preserving their utility.
    Orchids Vibe-Coding Hack: A BBC reporter was hacked on Orchids, a popular "vibe-coding" platform. A security researcher demonstrated a malicious code injection that compromised the user's development environment.
    AI vs. Legacy Email Security: AI-powered cyberattacks are successfully bypassing 88% of legacy email security systems. Attackers are utilizing LLMs to generate highly authentic phishing and impersonation content at scale.
    AI Doctors Evade Privacy Rules: AI-powered health services are not subject to the same strict privacy regulations as traditional healthcare facilities. This raises concerns around data leaks and medical hallucinations.
    OpenClaw Info Stealer: A variant of the Vidar info-stealer is targeting the OpenClaw ecosystem. The attack aims to exfiltrate configuration files and gateway authentication tokens.
    OpenClaw Founder Joins OpenAI: Peter Steinberger, the creator of the OpenClaw framework, has joined OpenAI. The OpenClaw project will transition to an open-source foundation supported by OpenAI.
    Claude's Geopolitical Role: Reports indicate that Anthropic's Claude was utilized via the Palantir platform during a US military raid in Venezuela. This raid led to the capture of Nicolas Maduro.
    ASIS AI Safety Report 2026: The International AI Safety Report highlights three emerging risks. These include the lowered barrier for biological weapons, the surge in deepfakes and fraud, and the difficulty of safety research.

    Episode Links
    https://www.microsoft.com/en-us/security/blog/2026/02/09/prompt-attack-breaks-llm-safety/
    https://www.bbc.com/news/articles/cy4wnw04e8wo
    https://www.cpapracticeadvisor.com/2026/02/09/study-ai-powered-cyber-attacks-hit-88-of-legacy-email-security-systems/177694/
    https://cyberscoop.com/ai-healthcare-apps-hipaa-privacy-risks-openai-anthropic/
    https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html
    https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/
    https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid
    https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2026/february/2026-international-safety-report/
  • This Week in AI Security - 12th February 2026

    12/02/2026 | 12 mins.
    In this episode of This Week in AI Security, Jeremy covers a concise but critical set of stories for the week of February 12, 2026. From physical world prompt injections targeting autonomous vehicles to massive data leaks in consumer AI wrappers, the intersection of AI and infrastructure remains the primary battleground.
    Key Stories & Developments:
    Prompt Injecting Autonomous Vehicles: Researchers at UCSC and Johns Hopkins have demonstrated that autonomous cars and drones can be compromised by "visual" prompt injections placed on physical signs, causing them to ignore traffic rules or misinterpret their surroundings.
    Massive Chat App Leak: The "Chat & Ask AI" wrapper application exposed 300 million messages belonging to 25 million users due to a simple Firebase misconfiguration that allowed unauthenticated access to read, modify, and delete data.
    Docker AI Metadata Attacks: A new vulnerability in Docker's AI assistant allows attackers to trigger exploits by planting malicious instructions within container image metadata.
    Claude Opus 4.6 vs. Security: Anthropic's latest model, Claude Opus 4.6, has demonstrated a frightening new capability: finding high-severity vulnerabilities and logic bugs via reasoning (rather than fuzzing) without needing specialized prompting or scaffolding.
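    The "Chat & Ask AI" leak belongs to a well-known misconfiguration class: Firebase Realtime Database rules left world-readable and world-writable. Below is an illustrative deny-by-default ruleset in the standard Firebase rules format; the affected app's actual configuration is not public, and the `messages`/`$uid` structure is a hypothetical example.

```json
{
  "rules": {
    // Deny everything by default; grant access explicitly below.
    ".read": false,
    ".write": false,
    "messages": {
      "$uid": {
        // Each authenticated user may only touch their own messages.
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

    A database shipped with `".read": true` at the root is exactly the unauthenticated read/modify/delete exposure described above.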

    Worried about OpenClaw on your network?
    The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach.
    Scan Your Network for Shadow Agents Now
    https://www.firetail.ai/schedule-your-demo

    Episode Links
    https://www.theregister.com/2026/01/30/road_sign_hijack_ai/
    https://www.malwarebytes.com/blog/news/2026/02/ai-chat-app-leak-exposes-300-million-messages-tied-to-25-million-users
    https://www.govinfosecurity.com/docker-ai-bug-lets-image-metadata-trigger-attacks-a-30709
    https://www.axios.com/2026/02/05/anthropic-claude-opus-46-software-hunting
    https://red.anthropic.com/2026/zero-days/


About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. It is also the home of 'This Week in AI Security', a snappy weekly round-up of interesting stories from across the AI threat landscape.

v8.7.2 | © 2007-2026 radio.de GmbH
Generated: 3/11/2026 - 6:42:53 AM