Powered by RND
PodcastsBusinessModern Cyber with Jeremy Snyder

Modern Cyber with Jeremy Snyder

Jeremy Snyder
Modern Cyber with Jeremy Snyder
Latest episode

Available Episodes

5 of 75
  • This Week in AI Security - 30th October 2025
    In this week's episode, Jeremy focuses on two rapidly evolving areas of AI security: the APIs that empower AI services and the risks emerging from new AI Browsers.We analyze two stories highlighting the exposure of secrets and sensitive data:API Insecurity: A path traversal vulnerability was discovered in the APIs powering an MCP server hosting service, leading to the exposure of 3,000 API keys. This reinforces the lesson that foundational security mistakes, such as inadequate secret management and unpatched vulnerabilities, are being repeated in the rush to launch new AI services.CVE in Google Cloud Vertex AI: We discuss a confirmed CVE in Google's Vertex AI service APIs. This vulnerability briefly allowed requests made by one customer's application to be routed and responded to another customer's account, risking exposure of sensitive corporate data and intellectual property in a multi-tenant SaaS environment.Finally, we explore the risks of AI Browsers (like the ChatGPT Atlas or Perplexity Comet browser) and AI Sidebars. These agents, designed to act with agency on a user's behalf (e.g., price comparison), are vulnerable to techniques that can reveal sensitive PII and user credentials to malicious websites, or unwittingly download malware.Episode Linkshttps://blog.gitguardian.com/breaking-mcp-server-hosting/https://cloud.google.com/support/bulletins#gcp-2025-059https://fortune.com/2025/10/23/cybersecurity-vulnerabilities-openai-chatgpt-atlas-ai-browser-leak-user-data-malware-prompt-injection/https://securityboulevard.com/2025/10/news-alert-squarex-reveals-new-browser-threat-ai-sidebars-cloned-to-exploit-user-trust/https://techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/____________Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of FireTail's AI Security & Governance Platform
    --------  
    10:53
  • This Week in AI Security - 23rd October 2025
    In this week's episode, recorded live from the inaugural AI Security Summit hosted by Snyk, Jeremy reports on the latest threats and strategic discussions shaping the industry. Covering multiple instances of "old risks" reappearing in new AI contexts...The Salesforce "forced leak" vulnerability, where an AI agent was exposed to malicious prompt injection via seemingly innocuous text fields on web forms (a failure of input sanitization).Research from Nvidia detailing waterhole attacks where malicious code (e.g., PowerShell) is hidden in decoy libraries (like "react-debug") that AI coding assistants might suggest to developers.A consumer AI girlfriend app that exposed customer chat data by storing conversations in an open Apache Kafka pipeline, demonstrating a basic failure of security hygiene under the pressure of rapid AI development.The "Glass Worm" campaign, where invisible Unicode control characters (similar to Ascii Smuggling research by Firetail) were used to embed malware in a VS Code plugin, proving the invisible code risk is actively being leveraged in development tools.Finally, Jeremy shares strategic insights from the summit, including the massive projected growth of the AI market (approaching the size of cloud computing), the urgency of data readiness and governance to prevent model poisoning, and the futurist perspective that AI's accelerated skill acquisition (potentially surpassing humans in certain tasks in an 18-month cycle) will require human workers to constantly upskill and change roles more frequently.Episode Linkshttps://noma.security/blog/forcedleak-agent-risks-exposed-in-salesforce-agentforce/https://www.koi.ai/blog/glassworm-first-self-propagating-worm-using-invisible-code-hits-openvsx-marketplacehttps://developer.nvidia.com/blog/from-assistant-to-adversary-exploiting-agentic-ai-developer-tools/https://www.foxnews.com/tech/ai-girlfriend-apps-leak-millions-private-chatshttps://layerxsecurity.com/blog/cometjacking-how-one-click-can-turn-perplexitys-comet-ai-browser-against-you/Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform
    --------  
    18:32
  • Chris Farris of fwd:cloudsec
    In this special in-person episode of Modern Cyber, recorded at fwd:cloudsec Europe, Jeremy is joined by cloud security expert and conference organizer Chris Farris. Drawing on his over 30 years in IT, Chris recounts his journey into cloud security, from his early days with Linux to moving video archives to AWS S3. The conversation revisits the foundational mindset shifts that occurred with the rise of the cloud, focusing on the agility it brought and the security gaps it created, such as the transition from rigid, on-premises governance to the chaotic freedom of API calls and ClickOps.The core of the episode explores the concept of the Sovereign Cloud, specifically Amazon's intended European Sovereign Cloud. Chris clarifies that simple data residency is not true sovereignty due to the US Cloud Act. He details the unique nature of the European partition—a completely separate partition, billing system, and support staff operated only by EU citizens—and identifies the primary flaw: the lack of a legal statute protecting the European employees from being compelled to act under the Cloud Act. Finally, Chris shares a powerful reflection on the fwd:cloudsec community, calling it a "second cloud family".Guest BioChris Farris is a highly experienced IT professional with a career spanning over 25 years. During this time, he has focused on various areas, including Linux, networking, and security. For the past eight years, he has been deeply involved in public-cloud and public-cloud security in media and entertainment, leveraging his expertise to build and evolve multiple cloud security programs.Chris is passionate about enabling the broader security team’s objectives of secure design, incident response, and vulnerability management. He has developed cloud security standards and baselines to provide risk-based guidance to development and operations teams. As a practitioner, he has architected and implemented numerous serverless and traditional cloud applications, focusing on deployment, security, operations, and financial modeling.He is one of the organizers of the fwd:cloudsec conference and presented at various AWS conferences and BSides events. He was named one of the inaugural AWS Security Heroes. Chris shares his insights on security and technology on social media platforms like BlueSky, Mastodon and his website chrisfarris.com.Episode Links‍https://fwdcloudsec.orghttps://fwdcloudsec.org/forum/https://www.chrisfarris.comDiscover all of your Shadow AI nowWorried about AI security? Get Complete AI Visibility in 15 Minutes. Book a demo of Firetail's AI Security & Governance Platform.
    --------  
    50:38
  • This Week in AI Security - 16 October 2025
    In this week's episode of This Week in AI Security, Jeremy covers four key developments shaping the AI security landscape.Jeremy begins by analyzing a GitHub Copilot flaw that exposed an LLM vulnerability similar to the one Jeremy disclosed last week. Researchers were able to use a hidden code comment feature to smuggle malicious prompts into the LLM, allowing them to potentially exfiltrate secrets and source code from private repositories. This highlights a growing risk in how LLMs process different input formats.Next, we discuss a fascinating research paper demonstrating the effectiveness of data poisoning. The study found that corrupting a model's behavior was possible with as few as 250 malicious documents—even in models with large training sets. By embedding a malicious command that mimicked sudo, researchers could implement a backdoor that sends data out, proving that the Attack Success Rate (ASR) is a critical metric for this real-world threat.We then examine a story at the intersection of agentic AI and supply chain risk, where untrusted actors exploited vulnerabilities in AI development plugins. By intercepting system prompts that lacked proper encryption, an attacker could discover the agent's permissions and potentially exfiltrate sensitive data, including Windows NTLM credentials.Finally, we look at the latest State of AI report, which provides further confirmation that LLMs like Claude are being used by malicious actors—specifically suspected North Korean state actors—to "vibe hack" the hiring process. By using AI to create perfect-looking resumes and tailored interview responses, the traditional method of spotting phony candidates by poor text quality is no longer reliable.Episode Links:https://www.securityweek.com/github-copilot-chat-flaw-leaked-data-from-private-repositories/https://www.anthropic.com/research/small-samples-poisonhttps://versprite.com/blog/watch-who-you-open-your-door-to-in-ai-times/https://excitech.substack.com/p/16-highlights-from-the-state-of-aihttps://www.stateof.ai/https://www.firetail.ai/blog/we-interviewed-north-korean-hacker-heres-what-learnedDiscover all of your Shadow AI now...Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform.
    --------  
    9:06
  • This Week in AI Security - 9th Oct 2025
    In this very first episode of 'This Week in AI Security', brought to you by the Firetail team, Jeremy dives into three crucial stories from the past week that highlight the rapidly evolving security landscape of AI adoption. We start with a classic error: a contractor for the Australian State of New South Wales repeated the "open S3 bucket" mistake by uploading a sensitive data set to a generative AI platform, confirming that old security missteps are resurfacing with new technology. Next, we look at a win for the defense: how Microsoft's AI analysis tools blocked a sophisticated phishing campaign that used AI-generated malicious code embedded in an SVG file and was sent from a compromised small business—a clear proof that AI can be very useful on the defensive side. Finally, we discuss recent research from the Firetail team uncovering an ASCII Smuggling vulnerability in Google Gemini, Grok, and other LLMs. This technique uses hidden characters to smuggle malicious instructions into benign-looking prompts (e.g., in emails or calendar invites). We detail the surprising dismissal of this finding by Google, which highlights the urgent need to address common, yet serious, social engineering risks in the new age of LLMs. Show links: https://databreaches.net/2025/10/06/nsw-gov-contractor-uploaded-excel-spreadsheet-of-flood-victims-data-to-chatgpt/ https://www.infosecurity-magazine.com/news/ai-generated-code-phishing/ https://www.firetail.ai/blog/ghosts-in-the-machine-ascii-smuggling-across-various-llms https://thehackernews.com/2025/09/researchers-disclose-google-gemini-ai.html ________ Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
    --------  
    8:11

More Business podcasts

About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry.Also the home of 'This Week in AI Security', a snappy weekly round up of interesting stories from across the AI threat landscape.
Podcast website

Listen to Modern Cyber with Jeremy Snyder, Take on Tomorrow and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Stations and podcasts to bookmark
  • Stream via Wi-Fi or Bluetooth
  • Supports Carplay & Android Auto
  • Many other app features

Modern Cyber with Jeremy Snyder: Podcasts in Family

Social
v7.23.11 | © 2007-2025 radio.de GmbH
Generated: 11/4/2025 - 10:46:47 PM