
Modern Cyber with Jeremy Snyder


108 episodes

  • This Week in AI Security - 7th May 2026

    07/05/2026 | 14 mins.
    In this episode for May 7, 2026, Jeremy reports from the sidelines of BSides Luxembourg. This week marks a significant shift in AI-driven vulnerability research, moving from source code analysis to the successful reverse engineering of closed-source compiled binaries.

    Key Episode Highlights:
    GitHub Backend RCE: Researchers from Wiz used AI-augmented binary analysis to find an X-stat header injection vulnerability (CVSS 8.7) in GitHub’s Git push pipeline, notable because the flaw was found in closed-source compiled code.
    The "Copyfail" Crisis: A critical Linux security flaw dating back to 2017 was uncovered using AI-assisted tools. The story highlights the tension between automated discovery and the rise of "AI slop" in automated vulnerability disclosures.
    CISA Patching Mandates: CISA is considering lowering the required "mean time to patch" from 14 days to just 3 days in response to AI’s ability to find vulnerabilities at an "apocalypse" scale.
    Shadow AI Exposure: A study by Intruder found over 1 million exposed AI services via certificate transparency logs, with 31% of Meta Llama servers requiring zero authentication.
    Google "Cosmo" Leak: A massive 1.13 GB system-level agent for Android briefly leaked on the Play Store, revealing an autonomous browser agent with deep system permissions.
    The Criminal Skill Gap: New research from the University of Edinburgh suggests that while AI is boosting professional developers, most cybercriminals currently lack the skills to weaponize AI at scale.
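    The certificate-transparency approach behind the Intruder findings can be sketched briefly: every TLS certificate issued for a public hostname lands in CT logs, so hostnames hinting at AI tooling can be harvested without scanning. The JSON shape below mirrors crt.sh-style output, and the keyword list is our own illustrative assumption, not the study's actual methodology.

    ```python
    import json

    # Keywords suggesting an AI service behind a hostname
    # (illustrative assumption, not the study's real heuristics).
    AI_KEYWORDS = ("ollama", "llama", "openwebui", "litellm", "vllm")

    def ai_hostnames(ct_entries_json: str) -> list:
        """Extract AI-looking hostnames from crt.sh-style JSON:
        a list of objects whose 'name_value' field holds one or
        more newline-separated certificate names."""
        hosts = set()
        for entry in json.loads(ct_entries_json):
            for name in entry.get("name_value", "").splitlines():
                name = name.strip().lower().lstrip("*.")
                if any(k in name for k in AI_KEYWORDS):
                    hosts.add(name)
        return sorted(hosts)

    # Two sample CT entries; only the first contains an AI-like name.
    sample = json.dumps([
        {"name_value": "ollama.internal.example.com\nwww.example.com"},
        {"name_value": "mail.example.org"},
    ])
    print(ai_hostnames(sample))  # ['ollama.internal.example.com']
    ```

    The point of the sketch: exposure discovery at this scale needs no port scanning at all, which is why shadow AI services keep surfacing in CT-based studies.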

    Shadow AI and unsecured AI models are the new frontier of enterprise risk. 31% of exposed AI servers are operating with zero authentication. Don't let your infrastructure be the next headline. Get full visibility into your AI environment in 15 minutes. Book your FireTail demo: https://www.firetail.ai/schedule-your-demo

    Episode Links
    https://www.wiz.io/blog/github-rce-vulnerability-cve-2026-3854
    https://cyberscoop.com/copy-fail-linux-vulnerability-artificial-intelligence/
    https://www.reuters.com/legal/litigation/us-officials-weigh-cutting-deadlines-fix-digital-flaws-amid-worries-over-ai-2026-05-01/
    https://venturebeat.com/security/ai-agent-runtime-security-system-card-audit-comment-and-control-2026
    https://thehackernews.com/2026/05/we-scanned-1-million-exposed-ai.html
    https://www.euronews.com/next/2026/05/05/cybercriminals-gave-ai-a-go-and-came-away-disappointed-study-finds
    https://www.bleepingcomputer.com/news/security/learning-from-the-vercel-breach-shadow-ai-and-oauth-sprawl/
    https://azat.tv/en/google-cosmo-ai-leak-privacy-safety/
  • This Week in AI Security - 30th April 2026

    30/04/2026 | 14 mins.
    In this episode for April 30, 2026, Jeremy breaks down a week where the "human-in-the-loop" failed spectacularly. From a production environment deleted in just nine seconds to "Abliterated" models providing kidnapping instructions to Congress, the risks of autonomous AI agents are no longer theoretical. They are live.
    Key Episode Highlights:
    Abliterated Models on Capitol Hill: OpenAI and Anthropic briefed House lawmakers on "abliterated" models - versions with safety guardrails stripped - demonstrating how they can provide step-by-step instructions for criminal acts.
    Entra ID Hijacking: Researchers at Silverfort discovered that the new "Agent ID" role in Microsoft Entra ID can be exploited to hijack service principals, leading to a full Global Admin takeover.
    The 9-Second Disaster: An AI agent at PocketOS, attempting to fix a staging environment, fetched production credentials and deleted both the production environment and its backups in under ten seconds.
    LiteLLM SQL Injection: A critical vulnerability in the LiteLLM gateway saw targeted exploitation within 36 hours of disclosure, specifically aiming for provider API keys.
    Vercel Breach Update: The recent Vercel data breach is traced back to a "Luma Stealer" malware infection at a third-party AI analytics partner.
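    The LiteLLM story is a reminder that the vulnerability class itself is decades old. The minimal sqlite3 sketch below shows the generic pattern and its fix; it is not LiteLLM's actual code, and the table and key names are invented for illustration.

    ```python
    # Generic illustration of the SQL-injection class behind the
    # LiteLLM advisory (the gateway's real code is not shown here).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE keys (team TEXT, api_key TEXT)")
    conn.execute("INSERT INTO keys VALUES ('alpha', 'sk-secret')")

    def lookup_unsafe(team):
        # Vulnerable: attacker input is concatenated into the SQL text.
        return conn.execute(
            f"SELECT api_key FROM keys WHERE team = '{team}'").fetchall()

    def lookup_safe(team):
        # Fixed: placeholder binding keeps input as data, never as SQL.
        return conn.execute(
            "SELECT api_key FROM keys WHERE team = ?", (team,)).fetchall()

    evil = "x' OR '1'='1"
    print(lookup_unsafe(evil))  # [('sk-secret',)] - filter bypassed
    print(lookup_safe(evil))    # [] - injection neutralized
    ```

    The 36-hour exploitation window underlines why pre-auth flaws in AI gateways are triaged like any other internet-facing SQLi: the AI context changes the prize (provider API keys), not the technique.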
    Episode Links
    https://www.politico.com/news/2026/04/22/ai-chatbots-jailbreak-safety-00887869
    https://security.googleblog.com/2026/04/ai-threats-in-wild-current-state-of.html
    https://www.microsoft.com/en-us/security/blog/2026/04/06/ai-enabled-device-code-phishing-campaign-april-2026/
    https://hackread.com/microsoft-entra-agent-id-flaw-tenant-takeover/
    https://www.bleepingcomputer.com/news/security/hackers-are-exploiting-a-critical-litellm-pre-auth-sqli-flaw/
    https://www.cbsnews.com/news/anthropic-investigates-mythos-ai-breach/
    https://thehackernews.com/2026/04/vercel-breach-tied-to-context-ai-hack.html
    https://x.com/lifeof_jer/status/2048103471019434248
    Is your organization part of the 82% with unknown AI agents running on your network? Don't wait for a "9-second deletion" event. Get full visibility into your AI agents today.
    Book your FireTail demo: https://www.firetail.ai/schedule-your-demo
  • This Week in AI Security - 23rd April 2026

    23/04/2026 | 15 mins.
    In this episode for April 23, 2026, Jeremy explores a week where "first principles" in security are being forgotten in the rush to adopt AI. From guessable API endpoints exposing Anthropic’s most powerful model to a $10,000 fine for a lawyer’s AI "slop," the message of the week is clear: There is no AI without API security.
    Key Stories & Developments:
    The Mythos API Leak: Unauthorized actors gained access to Anthropic’s Claude Mythos model by simply guessing API naming conventions. This classic case of Broken Function Level Authorization highlights a major oversight in the rollout of sensitive models.
    Shadow AI Agents: A new survey from the Cloud Security Alliance reveals that 82% of enterprises have unknown AI agents operating without security oversight.
    The $10K Hallucination: An Oregon lawyer was fined $10,000 for "AI slop" in court filings, setting a firm legal precedent that AI error does not excuse professional negligence.
    MCP Design Flaws: The Model Context Protocol (MCP), designed to wrap APIs in human language, is proving vulnerable to coercion. Attackers are using human language requests to probe back-end systems through NGINX.
    "Logjack": New research into "Logjack" shows how malicious prompts hidden in system logs can compromise the LLMs used to analyze them.
    Meta Keystroke Capturing: Reports indicate Meta is capturing employee keystrokes to refine internal AI training sets, raising massive concerns about insider risk and password exfiltration.
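    The "guessable naming" failure behind the Mythos leak is a textbook broken-authorization pattern: when endpoints follow a predictable convention and authorization is enforced per API key rather than per resource, extending the naming scheme with guessed codenames is a viable probe. The URL pattern and model names below are hypothetical, not Anthropic's actual API.

    ```python
    # Hypothetical sketch of why predictable endpoint naming plus
    # missing per-resource authorization (Broken Function Level
    # Authorization) is dangerous. Pattern and names are invented.

    KNOWN_PUBLIC_MODELS = ["claude-sonnet", "claude-haiku"]

    def candidate_endpoints(base, guesses):
        """Enumerate URLs that follow one observed naming convention."""
        return [f"{base}/v1/models/{name}/completions" for name in guesses]

    # An attacker appends a guessed internal codename to the public
    # convention; only per-model authorization checks stop the probe.
    guesses = KNOWN_PUBLIC_MODELS + ["claude-mythos"]
    for url in candidate_endpoints("https://api.example.com", guesses):
        print(url)
    ```

    The defense is equally old: authorize each request against the specific resource it names, never against the mere possession of a valid key.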
    Shadow AI agents are the new Shadow IT. Are you part of the 82% with zero visibility into your AI agents? Discover every agent and API connection in 15 minutes. Book your FireTail demo: https://www.firetail.ai/schedule-your-demo

    Episode Links
    https://www.inc.com/kevin-haynes/faulty-ai-leads-to-record-10000-fine-for-oregon-lawyer/91322007
    https://www.nytimes.com/2026/04/17/us/oregon-winery-ai-legal-fight.html
    https://techcrunch.com/2026/04/21/meta-will-record-employees-keystrokes-and-use-it-to-train-its-ai-models/
    https://cloudsecurityalliance.org/press-releases/2026/04/21/new-cloud-security-alliance-survey-reveals-82-of-enterprises-have-unknown-ai-agents-in-their-environments
    https://techcrunch.com/2026/04/20/app-host-vercel-confirms-security-incident-says-customer-data-was-stolen-via-breach-at-context-ai/
    https://www.securityweek.com/by-design-flaw-in-mcp-could-enable-widespread-ai-supply-chain-attacks/
    https://www.theregister.com/2026/04/16/anthropic_mcp_design_flaw/
    https://www.darkreading.com/application-security/critical-mcp-integration-flaw-nginx-risk
    https://www.helpnetsecurity.com/2026/04/16/llm-router-security-risk-agent-commands/
    https://oddguan.com/blog/comment-and-control-prompt-injection-credential-theft-claude-code-gemini-cli-github-copilot/
    https://arxiv.org/abs/2604.15368
    https://venturebeat.com/security/microsoft-salesforce-copilot-agentforce-prompt-injection-cve-agent-remediation-playbook
    https://techcrunch.com/2026/04/21/unauthorized-group-has-gained-access-to-anthropics-exclusive-cyber-tool-mythos-report-claims/
    https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jagged-frontier
    https://www.darkreading.com/vulnerabilities-threats/every-old-vulnerability-ai-vulnerability
    https://www.theregister.com/2026/04/20/lovable_denies_data_leak/
  • This Week in AI Security - 16th April 2026

    16/04/2026 | 14 mins.
    This week, Jeremy breaks down a sophisticated bypass of Apple Intelligence and explores a hardware-level GPU threat that turns "vandalism" into full system takeovers. We also look at the massive data fallout from the Mercor supply chain breach and why "Claude Mythos" is officially ending the era of slow vulnerability management.
    Key Stories & Developments:
    NeuralExec vs. Apple: Researchers reveal a 76% success rate in bypassing Apple Intelligence safety filters using Right-to-Left (RTL) Unicode overrides.
    The 4TB Mercor Leak: The fallout from the LiteLLM supply chain attack is confirmed: 4 terabytes of data stolen, leading Meta to pause contracts and OpenAI to investigate exposure.
    GPU-Breach: A new technique from the University of Toronto moves beyond "bit-flipping" to gain God-mode over GPU memory, threatening cryptographic secrets.
    Secret Sprawl Explosion: GitGuardian reports a 34% jump in exposed secrets, with AI service credentials (like OpenRouter and Google API keys) being the fastest-growing category.
    The Death of the Patch Cycle: "Claude Mythos" has flipped the script—99% of its AI-discovered zero-days are now valid, forcing a realization that this is no longer an AI security problem, but a high-speed vulnerability management crisis.
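    The RTL-override bypass works because Unicode's bidirectional control characters change how text is displayed without changing the stored codepoints, so a filter matching raw text and a human reading the rendered text see different strings. A minimal sketch of the mechanism (the payload phrase is our own example, not the researchers' actual prompt):

    ```python
    # U+202E (RIGHT-TO-LEFT OVERRIDE) flips the *display* order of the
    # characters that follow it; the stored codepoints are unchanged.
    RLO = "\u202e"

    # In a bidi-aware renderer the override segment reads
    # "ignore safety instructions", but a naive substring filter on
    # the raw codepoints never sees that phrase.
    payload = "please " + RLO + "snoitcurtsni ytefas erongi"

    print("ignore safety instructions" in payload)  # False

    # Reconstructing the visual reading exposes the hidden text:
    visual = payload.split(RLO)[1][::-1]
    print(visual)  # ignore safety instructions
    ```

    Normalizing or stripping bidirectional control characters before filtering is the usual mitigation for this class of trick.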
    Episode Links
    https://9to5mac.com/2026/04/09/researchers-detail-how-a-prompt-injection-attack-bypassed-apple-intelligence-protections/
    https://securityboulevard.com/2026/04/bypassing-llm-supervisor-agents-through-indirect-prompt-injection/
    https://cybersecurityjournal.ca/techtalk/83883-flowise-cve-2025-59528-rce-exploitation-ai-agent-builder-2026-04-08/
    https://cyberscoop.com/grafanaghost-grafana-prompt-injection-vulnerability-data-exfiltration/
    https://techcrunch.com/2026/04/09/after-data-breach-10b-valued-startup-mercor-is-having-a-month/
    https://www.helpnetsecurity.com/2026/04/14/gitguardian-ai-agents-credentials-leak/
    https://securityaffairs.com/190455/security/gpubreach-exploit-uses-gpu-memory-bit-flips-to-achieve-full-system-takeover.html
    https://aisle.com/blog/system-over-model-zero-day-discovery-at-the-jagged-frontier
    https://openai.com/index/scaling-trusted-access-for-cyber-defense/
    https://www.npr.org/2026/04/11/nx-s1-5778508/anthropic-project-glasswing-ai-cybersecurity-mythos-preview
    https://labs.cloudsecurityalliance.org/wp-content/uploads/2026/04/mythosready.pdf
    https://www.businessinsider.com/andon-market-luna-ai-agent-managed-store-san-francisco-2026-4#

    Worried about AI security?
    Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
  • This Week in AI Security - 9th April 2026

    09/04/2026 | 11 mins.
    In this episode for April 9, 2026, Jeremy covers a week dominated by highly sophisticated supply chain attacks and the emergence of "Project Glasswing", an internal Anthropic project revealing that next-gen AI models may be "too good" at finding zero-day vulnerabilities.
    Key Stories & Developments:
    The FBI's IC3 Report: For the first time in the report's 25-year history, the FBI has broken out AI-enabled fraud as its own category; it accounted for $893 million in losses across BEC, romance, and investment scams.
    Ollama Exposure Spikes: A Shodan scan reveals that publicly exposed Ollama instances have jumped from 1,100 in September 2025 to over 25,000 in April 2026.
    Critical Infrastructure CVEs: Both MLflow and PraisonAI received maximum CVSS scores of 10.0 for flaws allowing unauthenticated code execution and command injection.
    The Axios Supply Chain Heist: In a sophisticated "long con," threat actors (Team PCP) spent weeks building rapport with the Axios project maintainer via a fake Slack workspace. They eventually lured the maintainer into downloading malware, allowing them to inject a Remote Access Trojan (RAT) into a package installed 600,000 times.
    Project Glasswing (Claude Mythos): Leaked documents from Anthropic describe Claude Mythos, a model family with terrifying cybersecurity capabilities. Mythos discovered a 27-year-old bug predating GitHub; currently, 99% of the zero-days it has identified remain unpatched, leading to internal concerns about a controlled rollout.
    Vertex AI Permission Flaw: Unit 42 discovered a flaw in Google Cloud’s Vertex AI that could allow AI agents to bypass security boundaries and access sensitive data.
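    The Ollama numbers follow from the server's out-of-the-box behaviour: its HTTP API (default port 11434) answers requests such as listing installed models with no authentication. Below is a network-free sketch; the classification helper is our own, and only the idea of probing an unauthenticated model-listing endpoint comes from the story.

    ```python
    # Pure helper interpreting the result of probing a candidate host's
    # model-listing endpoint. The actual HTTP request (e.g. via urllib)
    # is deliberately omitted so the sketch stays network-free.

    def classify(status, body_is_json):
        """Interpret an HTTP probe result for an exposed-instance scan."""
        if status is None:
            return "unreachable"
        if status == 200 and body_is_json:
            return "exposed: unauthenticated AI API"
        if status in (401, 403):
            return "auth required"
        return "not a match / unknown"

    print(classify(200, True))   # exposed: unauthenticated AI API
    print(classify(401, False))  # auth required
    ```

    A 22x jump in exposed instances over seven months suggests the growth is driven by default-open deployments, not by attackers finding new flaws.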
    Episode Links
    https://securityboulevard.com/2026/04/cyber-fraud-cost-americans-17-billion-in-2025-ai-scams-make-list-fbi/
    https://insecurestack.substack.com/p/eus-exposed-ai-infrastructure
    https://securityonline.info/weekly-vulnerability-digest-april-2026-chrome-zero-day-ai-security/
    https://thehackernews.com/2026/03/vertex-ai-vulnerability-exposes-google.html
    https://fortune.com/2026/04/02/mercor-ai-startup-security-incident-10-billion/
    https://www.sans.org/blog/what-we-learned-axios-npm-supply-chain-compromise-emergency-briefing
    https://techcrunch.com/2026/04/06/north-koreas-hijack-of-one-of-the-webs-most-used-open-source-projects-was-likely-weeks-in-the-making/
    https://thehackernews.com/2026/04/flowise-ai-agent-builder-under-active.html
    https://www.securityweek.com/anthropic-unveils-claude-mythos-a-cybersecurity-breakthrough-that-could-also-supercharge-attacks/
    https://www.staffingindustry.com/news/global-daily-news/mercor-reports-data-breach
    https://red.anthropic.com/2026/mythos-preview/

    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo


About Modern Cyber with Jeremy Snyder

Looking for the latest news and views from the world of AI security? Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round-up of interesting stories from across the AI threat landscape.