
Modern Cyber with Jeremy Snyder

Hosted by Jeremy Snyder

Available Episodes

Showing 5 of 78 episodes
  • Ben Wilcox of ProArch
    In this episode of Modern Cyber, Jeremy is joined by Ben Wilcox, who holds the dual role of CTO and CISO at ProArch, to discuss navigating the critical intersection of speed, risk, and security in the era of AI. Ben shares his perspective as a long-time practitioner in the Microsoft ecosystem, emphasizing that the security stack must evolve with each major technology shift, from on-prem to cloud to AI.
    The conversation focuses on how to help customers achieve "data readiness" for AI adoption, stressing in particular that organizational discipline (like good compliance) is the fastest path to realizing AI's ROI. Ben reveals that the biggest concern he hears from enterprise customers is not LLM hallucinations or bias, but the risk of a major data breach via new AI services. He explains how ProArch leverages the comprehensive Microsoft security platform to provide centralized security and identity control across data, devices, and AI agents, ensuring that user access and data governance (Purview) trickle down through the entire stack.
    Finally, Ben discusses the inherent friction of his dual CISO/CTO role, explaining his philosophy of balancing rapid feature deployment with risk management: define a secure "MVP" baseline, then incrementally layer on controls as product maturity and risk increase.
    About Ben Wilcox
    Ben Wilcox is the Chief Technology Officer and Chief Information Security Officer at ProArch, where he leads global strategy for cloud modernization, cybersecurity, and AI enablement. With over two decades of experience architecting secure digital transformations, Ben helps enterprises innovate responsibly while maintaining compliance and resilience. He has recently guided Fortune 500 clients through AI adoption and zero-trust initiatives, ensuring that security evolves in step with rapid technological change.
    Episode Links
    https://www.proarch.com/
    https://www.linkedin.com/in/ben-wilcox/
    https://ignite.microsoft.com/en-US/home
    --------  
    39:24
  • This Week in AI Security - 13th November 2025
    In this week's episode, Jeremy covers seven significant stories and academic findings that reveal the escalating risks and new attack methods targeting Large Language Models (LLMs) and the broader AI ecosystem.
    Key stories include:
    PromptFlux Malware: Google Threat Intelligence Group (GTIG) discovered a new malware family called PromptFlux that uses the Google Gemini API to continuously rewrite and modify its own behavior to evade detection, a major evolution in malware capabilities.
    ChatGPT Leak: User interactions and conversations with ChatGPT have been observed leaking into Google Analytics and Google Search Console on third-party websites, potentially exposing the context of user queries.
    Traffic Analysis Leaks: New research demonstrates that observers can deduce the topics of an LLM chatbot conversation with high accuracy simply by analyzing the size and frequency of encrypted network packets (token volume), even without decrypting the data (see the side-channel sketch after the episode list).
    Secret Sprawl: An analysis by Wiz found that several of the world's largest AI companies are leaking secrets and credentials in their public GitHub repositories, underscoring that the speed of AI development is leading to basic, repeatable security mistakes (see the secret-scanning sketch after the episode list).
    Non-Deterministic LLMs: Research from Anthropic highlights that LLMs are non-deterministic and highly unreliable in describing their own internal reasoning processes, giving inconsistent responses even to minor prompt variations.
    The New AI VSS: The OWASP Foundation unveiled the AI Vulnerability Scoring System (AI VSS), a new framework to consistently classify and quantify the severity (on a 0-10 scale) of risks like prompt injection in LLMs, helping organizations make better risk-informed decisions.
    Episode Links:
    https://cybersecuritynews.com/promptflux-malware-using-gemini-api/
    https://thehackernews.com/2025/11/microsoft-uncovers-whisper-leak-attack.html
    https://arstechnica.com/ai/2025/11/llms-show-a-highly-unreliable-capacity-to-describe-their-own-internal-processes/
    https://futurism.com/artificial-intelligence/llm-robot-vacuum-existential-crisis
    https://www.scworld.com/resource/owasp-global-appsec-new-ai-vulnerability-scoring-system-unveiled
    https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/
    https://www.securityweek.com/many-forbes-ai-50-companies-leak-secrets-on-github/
    --------  
    15:38
  • This Week in AI Security - 6th November 2025
    In this week's episode, Jeremy looks at three compelling stories and a significant academic paper that illustrate the accelerating convergence of AI, APIs, and network security.
    API Exposure in AI Services: We discuss a path traversal vulnerability that led to the discovery of 3,000 API keys in a managed AI hosting service, underscoring that the API remains the exposed attack surface where data exfiltration occurs.
    AI Code Agent Traffic Analysis: Drawing on research from Chaser Systems, Jeremy breaks down the network traffic from popular AI coding agents (like Copilot and Cursor). The analysis reveals that sensitive data, including previous conversation context and PII, is repeatedly packaged and resent with every subsequent request, making detection harder and leakage risk significantly higher.
    LLM-Powered Malware: We cover a groundbreaking discovery by the Microsoft Incident Response Team (DART): malware using the OpenAI Assistants API as its Command and Control (C2) server. This new category of malware replaces traditional hard-coded instructions with an LLM-driven "brain", giving it the potential to coordinate malicious activity with context, creativity, and adaptability.
    The Guardrail Fallacy: Finally, Jeremy discusses an academic paper showing that strong, adaptive attacks can bypass LLM defenses against jailbreaks and prompt injections with an Attack Success Rate (ASR) of over 90%. The research argues that simple guardrails provide organizations with a dangerous false sense of security.
    Episode Links
    https://chasersystems.com/blog/what-data-do-coding-agents-send-and-where-to/
    https://embracethered.com/blog/posts/2025/claude-abusing-network-access-and-anthropic-api-for-data-exfiltration/
    https://arxiv.org/pdf/2510.09023
    https://www.microsoft.com/en-us/security/blog/2025/11/03/sesameop-novel-backdoor-uses-openai-assistants-api-for-command-and-control/
    ------
    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of FireTail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
    --------  
    15:49
  • This Week in AI Security - 30th October 2025
    In this week's episode, Jeremy focuses on two rapidly evolving areas of AI security: the APIs that power AI services and the risks emerging from new AI browsers.
    We analyze two stories highlighting the exposure of secrets and sensitive data:
    API Insecurity: A path traversal vulnerability was discovered in the APIs powering an MCP server hosting service, leading to the exposure of 3,000 API keys. This reinforces the lesson that foundational security mistakes, such as inadequate secret management and unpatched vulnerabilities, are being repeated in the rush to launch new AI services (see the path-validation sketch after the episode list).
    CVE in Google Cloud Vertex AI: We discuss a confirmed CVE in Google's Vertex AI service APIs. This vulnerability briefly allowed requests made by one customer's application to be routed to, and answered from, another customer's account, risking exposure of sensitive corporate data and intellectual property in a multi-tenant SaaS environment.
    Finally, we explore the risks of AI browsers (like ChatGPT Atlas or Perplexity's Comet) and AI sidebars. These agents, designed to act with agency on a user's behalf (e.g., price comparison), are vulnerable to techniques that can reveal sensitive PII and user credentials to malicious websites, or unwittingly download malware.
    Episode Links
    https://blog.gitguardian.com/breaking-mcp-server-hosting/
    https://cloud.google.com/support/bulletins#gcp-2025-059
    https://fortune.com/2025/10/23/cybersecurity-vulnerabilities-openai-chatgpt-atlas-ai-browser-leak-user-data-malware-prompt-injection/
    https://securityboulevard.com/2025/10/news-alert-squarex-reveals-new-browser-threat-ai-sidebars-cloned-to-exploit-user-trust/
    https://techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/
    ____________
    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of FireTail's AI Security & Governance Platform.
    --------  
    10:53
  • This Week in AI Security - 23rd October 2025
    In this week's episode, recorded live from the inaugural AI Security Summit hosted by Snyk, Jeremy reports on the latest threats and strategic discussions shaping the industry, covering multiple instances of "old risks" reappearing in new AI contexts:
    The Salesforce ForcedLeak vulnerability, where an AI agent was exposed to malicious prompt injection via seemingly innocuous text fields on web forms (a failure of input sanitization).
    Research from Nvidia detailing watering hole attacks in which malicious code (e.g., PowerShell) is hidden in decoy libraries (like "react-debug") that AI coding assistants might suggest to developers.
    A consumer AI girlfriend app that exposed customer chat data by storing conversations in an open Apache Kafka pipeline, demonstrating a basic failure of security hygiene under the pressure of rapid AI development.
    The GlassWorm campaign, where invisible Unicode control characters (similar to the ASCII Smuggling research by FireTail) were used to embed malware in a VS Code plugin, proving that the invisible-code risk is actively being leveraged in development tools (see the invisible-character scanner sketch after the episode list).
    Finally, Jeremy shares strategic insights from the summit, including the massive projected growth of the AI market (approaching the size of cloud computing), the urgency of data readiness and governance to prevent model poisoning, and the futurist perspective that AI's accelerated skill acquisition (potentially surpassing humans in certain tasks on an 18-month cycle) will require human workers to constantly upskill and change roles more frequently.
    Episode Links
    https://noma.security/blog/forcedleak-agent-risks-exposed-in-salesforce-agentforce/
    https://www.koi.ai/blog/glassworm-first-self-propagating-worm-using-invisible-code-hits-openvsx-marketplace
    https://developer.nvidia.com/blog/from-assistant-to-adversary-exploiting-agentic-ai-developer-tools/
    https://www.foxnews.com/tech/ai-girlfriend-apps-leak-millions-private-chats
    https://layerxsecurity.com/blog/cometjacking-how-one-click-can-turn-perplexitys-comet-ai-browser-against-you/
    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of FireTail's AI Security & Governance Platform.
    --------  
    18:32
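To make the traffic-analysis story from the 13th November episode concrete, here is a minimal, purely illustrative Python sketch of the underlying idea: an observer never decrypts anything, yet packet counts and sizes (a rough proxy for token volume) can still separate conversation topics. The traces, topics, and the simple nearest-centroid classifier below are invented for demonstration and are not the methodology of the cited Whisper Leak research.

from math import dist
from statistics import mean

def features(trace):
    # Features an on-path observer can compute without decrypting anything:
    # packet count, mean packet size, and total bytes (a proxy for token volume).
    return (len(trace), mean(trace), sum(trace))

# Invented per-topic packet-size traces standing in for observed encrypted traffic.
training = {
    "medical": [[310, 295, 330, 305, 320, 315], [300, 325, 310, 290, 335, 305, 315]],
    "smalltalk": [[120, 140, 115], [130, 125, 118, 122]],
}

# One centroid of feature vectors per topic.
centroids = {
    topic: tuple(mean(col) for col in zip(*(features(t) for t in traces)))
    for topic, traces in training.items()
}

def guess_topic(trace):
    f = features(trace)
    return min(centroids, key=lambda topic: dist(f, centroids[topic]))

print(guess_topic([305, 318, 322, 298, 311, 327]))  # prints "medical"

The point is simply that traffic metadata alone carries signal; mitigations for this class of leak generally move in the direction of padding or batching streamed responses.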
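The "secret sprawl" finding from the same episode comes down to credentials committed to public repositories. Below is a minimal sketch of the kind of scan that catches this before a push; the regex patterns are illustrative only, and real scanners (gitleaks, TruffleHog, and similar tools) ship far larger rule sets.

import re
import sys
from pathlib import Path

# Illustrative patterns only, not an exhaustive rule set.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "openai_style_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "generic_assignment": re.compile(r"(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]", re.I),
}

def scan(root: Path) -> int:
    findings = 0
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                findings += 1
                # Print only a prefix of the match so the scan itself does not re-leak the secret.
                print(f"{path}: possible {name}: {match.group(0)[:12]}...")
    return findings

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    sys.exit(1 if scan(root) else 0)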
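Both the 6th November and 30th October episodes mention a path traversal bug in an AI/MCP hosting service that exposed 3,000 API keys. A minimal sketch of the missing validation step, assuming a hypothetical per-tenant workspace layout, is shown below: resolve the requested path fully and refuse anything that escapes the tenant's directory.

from pathlib import Path

# Hypothetical layout: one directory per tenant workspace under a single base directory.
BASE_DIR = Path("/srv/ai-hosting/workspaces").resolve()

def safe_read(workspace: str, requested: str) -> bytes:
    """Return file contents only if the fully resolved path stays inside the workspace."""
    workspace_root = (BASE_DIR / workspace).resolve()
    target = (workspace_root / requested).resolve()
    # Path.is_relative_to requires Python 3.9+.
    if not (workspace_root.is_relative_to(BASE_DIR) and target.is_relative_to(workspace_root)):
        raise PermissionError(f"path traversal attempt blocked: {requested!r}")
    return target.read_bytes()

# A request such as safe_read("tenant-a", "../tenant-b/.env") resolves outside
# tenant-a's directory and is rejected instead of handing back another tenant's secrets.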
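The GlassWorm item from the 23rd October episode relies on characters that render as nothing in an editor but are still processed by tooling and AI assistants. Here is a minimal detection sketch; the character set it checks (common zero-width characters plus the Unicode Tags block) is illustrative rather than an exhaustive list of smuggling carriers.

import sys
import unicodedata
from pathlib import Path

# Common zero-width characters used to hide payloads in otherwise normal-looking source.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def is_invisible(ch: str) -> bool:
    # Unicode Tags block (U+E0000-U+E007F) plus zero-width characters.
    return ch in ZERO_WIDTH or 0xE0000 <= ord(ch) <= 0xE007F

def scan_file(path: Path) -> None:
    text = path.read_text(encoding="utf-8", errors="ignore")
    for lineno, line in enumerate(text.splitlines(), start=1):
        hits = [f"U+{ord(c):04X} {unicodedata.name(c, 'UNNAMED')}" for c in line if is_invisible(c)]
        if hits:
            print(f"{path}:{lineno}: invisible characters: {', '.join(hits)}")

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        scan_file(Path(arg))

Running it over a source tree or an extension bundle (for example, python scan.py extension/src/*.js) flags lines that look empty but carry hidden content.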


About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. The show is also the home of 'This Week in AI Security', a snappy weekly round-up of interesting stories from across the AI threat landscape.