
Modern Cyber with Jeremy Snyder

Jeremy Snyder

93 episodes

  • This Week in AI Security - 19th February 2026

    19/02/2026 | 12 mins.
    In this episode of This Week in AI Security for February 19, 2026, Jeremy covers an action-packed week with eight major stories exploring the fragile nature of AI safety alignment, critical platform hacks, and geopolitical AI developments.
    Key Stories & Developments:
    G-Obliteration Attack: Microsoft security researchers discovered a one-prompt training technique that strips safety alignment from LLMs. By fine-tuning with Group Relative Policy Optimization (GRPO) on a single mild prompt, attackers can remove refusals across unrelated harm categories, effectively stripping guardrails from 15 open-source models while preserving their utility.
    Orchids Vibe-Coding Hack: A security researcher demonstrated a malicious code injection on Orchids, a popular "vibe-coding" platform, compromising the development environment of a BBC reporter.
    AI vs. Legacy Email Security: AI-powered cyberattacks are successfully bypassing 88% of legacy email security systems. Attackers are utilizing LLMs to generate highly authentic phishing and impersonation content at scale.
    AI Doctors Evade Privacy Rules: AI-powered health services are not subject to the same strict privacy regulations as traditional healthcare facilities. This raises concerns around data leaks and medical hallucinations.
    OpenClaw Info Stealer: A variant of the Vidar info-stealer is targeting the OpenClaw ecosystem, aiming to exfiltrate configuration files and gateway authentication tokens (a minimal config-hardening check is sketched after this list).
    OpenClaw Founder Joins OpenAI: Peter Steinberger, the creator of the OpenClaw framework, has joined OpenAI. The OpenClaw project will transition to an open-source foundation supported by OpenAI.
    Claude's Geopolitical Role: Reports indicate that Anthropic's Claude was utilized via the Palantir platform during a US military raid in Venezuela. This raid led to the capture of Nicolas Maduro.
    ASIS AI Safety Report 2026: The International AI Safety Report highlights three emerging risks. These include the lowered barrier for biological weapons, the surge in deepfakes and fraud, and the difficulty of safety research.
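    As a defensive aside on the info-stealer story above: a minimal sketch of how you might check that agent configuration files are not left world-readable with plaintext tokens. The file paths and the token heuristic are assumptions for illustration, not documented OpenClaw locations.

```python
# Minimal sketch: flag agent config files that are world-readable or that
# appear to contain plaintext tokens. Paths and the token pattern are
# illustrative assumptions, not documented OpenClaw details.
import re
import stat
from pathlib import Path

# Hypothetical locations; point these at wherever your agent keeps its config.
CANDIDATE_CONFIGS = [
    Path.home() / ".openclaw" / "config.json",
    Path.home() / ".config" / "openclaw" / "settings.json",
]

# Rough heuristic for long token-like strings.
TOKEN_PATTERN = re.compile(r"[A-Za-z0-9_\-]{32,}")

for path in CANDIDATE_CONFIGS:
    if not path.exists():
        continue
    mode = path.stat().st_mode
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        print(f"[!] {path} is readable by group/other ({stat.filemode(mode)})")
    for token in TOKEN_PATTERN.findall(path.read_text(errors="ignore")):
        print(f"[!] {path} may hold a plaintext token: {token[:8]}...")
```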
    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of FireTail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

    Episode Links
    https://www.microsoft.com/en-us/security/blog/2026/02/09/prompt-attack-breaks-llm-safety/
    https://www.bbc.com/news/articles/cy4wnw04e8wo
    https://www.cpapracticeadvisor.com/2026/02/09/study-ai-powered-cyber-attacks-hit-88-of-legacy-email-security-systems/177694/
    https://cyberscoop.com/ai-healthcare-apps-hipaa-privacy-risks-openai-anthropic/
    https://thehackernews.com/2026/02/infostealer-steals-openclaw-ai-agent.html
    https://techcrunch.com/2026/02/15/openclaw-creator-peter-steinberger-joins-openai/
    https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid
    https://www.asisonline.org/security-management-magazine/latest-news/today-in-security/2026/february/2026-international-safety-report/
  • This Week in AI Security - 12th February 2026

    12/02/2026 | 12 mins.
    In this episode of This Week in AI Security, Jeremy covers a concise but critical set of stories for the week of February 12, 2026. From physical world prompt injections targeting autonomous vehicles to massive data leaks in consumer AI wrappers, the intersection of AI and infrastructure remains the primary battleground.
    Key Stories & Developments:
    Prompt Injecting Autonomous Vehicles: Researchers at UCSC and Johns Hopkins have demonstrated that autonomous cars and drones can be compromised by "visual" prompt injections placed on physical signs, causing them to ignore traffic rules or misinterpret their surroundings.
    Massive Chat App Leak: The "Chat & Ask AI" wrapper application exposed 300 million messages belonging to 25 million users due to a simple Firebase misconfiguration that allowed unauthenticated access to read, modify, and delete data (a quick self-check for this failure mode is sketched after this list).
    Docker AI Metadata Attacks: A new vulnerability in Docker's AI assistant allows attackers to trigger exploits by planting malicious instructions within container image metadata.
    Claude Opus 4.6 vs. Security: Anthropic's latest model, Claude Opus 4.6, has demonstrated a frightening new capability: finding high-severity vulnerabilities and logic bugs via reasoning (rather than fuzzing) without needing specialized prompting or scaffolding.
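    A quick way to test the Firebase failure mode described above, as a minimal sketch: query the database's REST endpoint and see whether the root is readable without credentials. The project URL is a placeholder, and this should only be run against databases you own or are authorized to assess.

```python
# Minimal sketch: test whether a Firebase Realtime Database permits
# unauthenticated reads via its REST API. The URL is a placeholder.
import requests

DB_URL = "https://YOUR-PROJECT-default-rtdb.firebaseio.com"  # placeholder

# Appending /.json to the database root is the standard REST read path;
# shallow=true avoids downloading the whole tree.
resp = requests.get(f"{DB_URL}/.json", params={"shallow": "true"}, timeout=10)

if resp.status_code == 200:
    print("[!] Root is readable without authentication:", resp.text[:80])
elif resp.status_code in (401, 403):
    print("[ok] Unauthenticated reads are denied.")
else:
    print(f"[?] Unexpected status: {resp.status_code}")
```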

    Worried about OpenClaw on your network?
    The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach.
    Scan Your Network for Shadow Agents Now
    https://www.firetail.ai/schedule-your-demo

    Episode Links
    https://www.theregister.com/2026/01/30/road_sign_hijack_ai/
    https://www.malwarebytes.com/blog/news/2026/02/ai-chat-app-leak-exposes-300-million-messages-tied-to-25-million-users
    https://www.govinfosecurity.com/docker-ai-bug-lets-image-metadata-trigger-attacks-a-30709
    https://www.axios.com/2026/02/05/anthropic-claude-opus-46-software-hunting
    https://red.anthropic.com/2026/zero-days/
  • This Week in AI Security - 5th February 2026

    05/02/2026 | 12 mins.
    In this first episode of February 2026, Jeremy breaks down a high-stakes week in AI security, featuring active attacks on exposed LLM infrastructure, a critical MCP tool flaw, leaky consumer AI products, and a data-handling lapse at the top of US cyber defense.
    Key Stories & Developments:
    Operation Bizarre Bazaar: Threat actors are actively targeting exposed LLM infrastructure to steal computing resources for cryptocurrency mining and resell API access on dark markets, attempting to pivot into internal systems via compromised MCP servers.
    Gemini MCP Tool Exploit: A critical Remote Code Execution (RCE) vulnerability was identified in a Gemini Model Context Protocol (MCP) tool, highlighting the recurring theme that the infrastructure powering LLMs remains a primary weak point.
    MoltBook API Leak: Researchers discovered a hardcoded Supabase API key in "MoltBook," a social network for AI agents. The flaw granted unauthenticated access to the entire production database, exposing over 1.5 million API keys (a simple scan for this class of flaw is sketched after this list).
    Bondu AI Toy Breach: A privacy failure in an AI-powered dinosaur toy left 50,000 chat log records exposed to anyone with a Gmail account, underscoring the lack of robust authentication in consumer AI IoT devices.
    CISA Chief's Data Mishandling: Reports surfaced that the acting head of CISA, the US cyber defense agency, uploaded sensitive "official use only" documents into a public version of ChatGPT, bypassing enterprise controls and security protocols.
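    On the MoltBook story: hardcoded keys in shipped JavaScript are usually findable with a simple pattern scan. Below is a minimal sketch, assuming a dist/ build-output directory; the patterns are rough illustrations rather than a production secret scanner (Supabase keys are JWTs, hence the "eyJ" prefix of a base64-encoded header).

```python
# Minimal sketch: scan built client bundles for hardcoded secrets such as
# Supabase keys. Bundle path and patterns are illustrative assumptions.
import re
from pathlib import Path

PATTERNS = {
    "JWT-like key (e.g. Supabase)":
        re.compile(r"eyJ[\w\-]{10,}\.[\w\-]{10,}\.[\w\-]{10,}"),
    "generic key assignment":
        re.compile(r"(?i)(?:api[_-]?key|secret)\s*[:=]\s*['\"]([^'\"]{16,})['\"]"),
}

for path in Path("dist").rglob("*.js"):  # assumed build-output directory
    text = path.read_text(errors="ignore")
    for label, pattern in PATTERNS.items():
        for hit in pattern.findall(text):
            print(f"[!] {path}: possible {label}: {hit[:12]}...")
```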

    Worried about OpenClaw on your network?
    The OpenClaw crisis proved that employees are deploying unvetted AI agents on their local machines. FireTail helps you discover and govern Shadow AI before it becomes a breach.
    Scan Your Network for Shadow Agents Now
    https://www.firetail.ai/schedule-your-demo

    Episode Links
    https://www.bleepingcomputer.com/news/security/hackers-hijack-exposed-llm-endpoints-in-bizarre-bazaar-operation/
    https://darkwebinformer.com/cve-2026-0755-reported-zero-day-in-gemini-mcp-tool-could-allow-remote-code-execution/
    https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
    https://ai.plainenglish.io/clawdbot-security-guide-de77b45ab719
    https://blackoutvpn.au/blog/dont-buy-internet-connected-toys
    https://www.politico.com/news/2026/01/27/cisa-madhu-gottumukkala-chatgpt-00749361
  • This Week in AI Security - 29th January 2026

    29/01/2026 | 23 mins.
    In this final episode of January 2026, Jeremy breaks down a high-stakes week in AI security, featuring critical framework flaws, cloud-native exploits, and a major security warning regarding a popular autonomous AI agent.
    Key Stories & Developments:
    Chainlit Framework Flaws: Two critical CVEs were identified in Chainlit, a popular Python package for building enterprise chatbots. These vulnerabilities, including Arbitrary File Read and Server-Side Request Forgery (SSRF), highlight the supply chain risks inherent in the rapidly growing AI development ecosystem.
    Google Gemini Workspace Exploit: Researchers demonstrated how Gemini can be manipulated via malicious calendar invites. By embedding hidden instructions (a technique akin to ASCII or emoji smuggling), attackers can trick the AI into exfiltrating sensitive user data, such as meeting details and attachments (a minimal detector for such hidden characters is sketched after this list).
    VS Code "Spyware" Plugins: Over 1.5 million developers were potentially exposed to malicious VS Code extensions impersonating ChatGPT. These plugins serve as "watering hole" attacks designed to harvest sensitive environment variables, credentials, and deployment keys.
    Vertex AI Privilege Escalation: A novel attack chain in Google’s Vertex AI was disclosed. Attackers used a malicious reverse shell in a reasoning engine function to escalate privileges via the Instance Metadata Service, gaining master access to chat sessions, storage buckets, and logs.
    The "Cloudbot" Warning: A deep dive into Cloudbot (now rebranded as ClawdBot), a general-purpose AI agent. Researchers found hundreds of instances sitting wide open on the internet, many providing full root shell access and exposing personal conversation histories and API keys.
    Episode Links
    https://www.theregister.com/2026/01/20/ai_framework_flaws_enterprise_clouds/
    https://www.securityweek.com/weaponized-invite-enabled-calendar-data-theft-via-google-gemini/
    https://cybernews.com/security/fake-chatgpt-vscode-extensions-compromised-developers/
    https://gbhackers.com/google-vertex-ai-flaw/
    https://www.insurancejournal.com/magazines/mag-features/2026/01/26/855293.htm
    https://arxiv.org/pdf/2601.10338
    https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/
    https://securityboulevard.com/2026/01/clawdbot-is-what-happens-when-ai-gets-root-access-a-security-experts-take-on-silicon-valleys-hottest-ai-agent/
    https://jpcaparas.medium.com/hundreds-of-clawdbot-instances-were-exposed-on-the-internet-heres-how-to-not-be-one-of-them-63fa813e6625
    https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers

    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of FireTail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
  • Sydney Marrone of Nebulock

    28/01/2026 | 39 mins.
    In this episode of Modern Cyber, Jeremy is joined by Sydney Marrone, a premier expert in the field of threat hunting and the Head of Threat Hunting at Nebulock. The conversation explores the rapidly evolving intersection of threat hunting and artificial intelligence, specifically focusing on how AI agents are transforming the speed and efficacy of defensive operations.
    Sydney shares her journey from "crawling under desks" in IT to building elite threat hunting teams at major organizations like Lumen (formerly CenturyLink) and Splunk. She breaks down her newly released Agentic Threat Hunting Framework (ATHF) and the LOCK pattern (Learn, Observe, Check, Keep), explaining how AI can condense a hunt that previously took four weeks into a mere 45 minutes. They also discuss the critical need for AI governance, the risks of "ungoverned access," and why "trust but verify" remains the golden rule when integrating LLMs into security workflows.
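    For flavor, here is a purely illustrative skeleton of how a Learn-Observe-Check-Keep hunt loop might be organized in code. The real ATHF (linked below) defines its own structure; all names and steps here are assumptions made for illustration only.

```python
# Purely illustrative LOCK-style (Learn, Observe, Check, Keep) hunt loop.
from dataclasses import dataclass, field

@dataclass
class Hunt:
    hypothesis: str
    observations: list[str] = field(default_factory=list)
    findings: list[str] = field(default_factory=list)

    def learn(self) -> None:
        """Pull intel and past hunt notes relevant to the hypothesis."""
        print(f"learning context for: {self.hypothesis}")

    def observe(self) -> None:
        """Query telemetry (EDR, DNS, auth logs) for matching activity."""
        self.observations = ["suspicious vpn login burst"]  # stand-in data

    def check(self) -> None:
        """Validate observations against the hypothesis, drop noise."""
        self.findings = [o for o in self.observations if "vpn" in o]

    def keep(self) -> None:
        """Persist outcomes so the next hunt starts from what was learned."""
        print(f"keeping {len(self.findings)} finding(s)")

hunt = Hunt("credential stuffing against the VPN portal")
for step in (hunt.learn, hunt.observe, hunt.check, hunt.keep):
    step()
```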
    About Sydney Marrone
    Sydney Marrone is the Head of Threat Hunting at Nebulock and a co-founder of the THOR Collective. With over a decade of experience in incident response, forensics, and blue teaming, she has become a leading voice in structured threat hunting. Sydney is the author of the Agentic Threat Hunting Framework (ATHF) and the co-author of the PEAK Threat Hunting Framework, which won a SANS award for its contribution to the community.
    A respected author and educator, Sydney co-authored The Threat Hunter's Cookbook and is currently developing a SANS course focused on threat hunting. Her work focuses on moving organizations from reactive to proactive security postures through advanced data science, automation, and authentic AI integration.
    Episode Links
    Nebulock (AI-Powered Threat Hunting): https://nebulock.io/
    Agentic Threat Hunting Framework (ATHF): https://github.com/Nebulock-Inc/agentic-threat-hunting-framework
    THOR Collective (Substack & Community): https://dispatch.thorcollective.com/
    PEAK Threat Hunting Framework: https://www.splunk.com/en_us/blog/security/peak-threat-hunting-framework.html
    HEARTH Repository (THOR Collective): https://github.com/THORCollective/HEARTH
    Threat Hunting MCP Server: https://github.com/THORCollective/threat-hunting-mcp-server


About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. It is also the home of 'This Week in AI Security', a snappy weekly round-up of interesting stories from across the AI threat landscape.