
Modern Cyber with Jeremy Snyder

Jeremy Snyder

Available Episodes (5 of 81)
  • This Week in AI Security - 27th November 2025
    In this week's episode, Jeremy covers seven stories that highlight the continuing pattern of API-level risks, the rise of multi-agent threats, and new academic insights into LLM fundamentals. Key stories include:
    • RCE via PyTorch: A high-severity vulnerability (with an assigned CVE) was discovered in the widely used PyTorch package, enabling Remote Code Execution (RCE) through malicious payloads at the API layer. This reinforces the trend of the API being the primary attack surface for AI applications.
    • AI Browser Local Command Execution: Researchers found an API flaw in AI browsers that allowed a malicious instruction set to execute local commands on a user's machine via an embedded extension.
    • Klein Bot Vulnerabilities: An open-source coding agent was found to have multiple security flaws, including the exfiltration of API keys and the disclosure of its underlying model (Grok), validating OWASP's risk categories.
    • Multi-Agent Risk in ServiceNow: Researchers demonstrated that in ServiceNow's new agent-to-agent (A2A) workflows, default configurations place agents on the same network, allowing them to communicate and be exploited using the privileges of the human user who created them.
    • The "Subspace Problem" of Red Teaming: Academic research argues that current LLM red-teaming methods are flawed because they test human language, not the numerical token strings the LLM actually processes, meaning predictable token-level vulnerabilities remain hidden.
    • AI Evaluation Shift: A paper argues that non-deterministic LLM environments require a shift away from binary "yes/no" security checks (like traditional network security) toward scenario-based testing for better risk evaluation.
    • Positive ROI of AI in Security: A Google paper provides positive data for early movers, showing that AI can triage at least 50% of security incidents, reducing human workloads and speeding up response times, and making a strong case for simple, prompt-based AI improvements in security operations.
    ------
    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
    --------  
    13:59
  • Adam Pilton of Heimdal
    In this episode of Modern Cyber, Jeremy is joined by Adam Pilton, a cybersecurity expert with a background of 15 years in law enforcement, where his final role was as a Detective Sergeant leading the Covert Operations and Cyber Crime teams. Drawing on his unique experience investigating and prosecuting hundreds of offenders, Adam provides a frontline perspective on the current state of cybercrime, noting that cybercriminals are "getting better and stronger" while individuals and businesses are "not keeping up".
    The conversation focuses on the human and organizational challenges in cybersecurity, stressing that small businesses should abandon the belief that they are too small to be targeted, as attackers "hit small businesses all day long" for incremental profit. Adam discusses the severe practical impacts of attacks, warning that businesses must "expect downtime" and be prepared for the significant time needed for recovery. He advocates for storytelling and analogies (like comparing hacking to a burglary) over technical regulations to build a strong security culture.
    Adam also shares insights from his post-law enforcement work as an auditor and consultant, highlighting the common organizational "motivation problem" where people acknowledge the risk but delay action, comparing it to perpetually starting a diet "tomorrow". Finally, he addresses the breakdown of trust in the age of deepfakes (citing the Irish election example) and the critical need for continuous tabletop exercises to test communication and expose "little gaps" before a crisis hits.
    Guest Bio – Adam Pilton
    With a background of 15 years in law enforcement, Adam's final role was as a Detective Sergeant leading the Covert Operations and Cyber Crime teams. Adam has worked in cyber security since 2016 across various roles and has a broad understanding of the field, from the impact of cybercrime on individuals and businesses to the need to convey the right messages to senior leaders and end users, ensuring engagement and support.
    As a subject matter expert in multiple areas for a large organisation, Adam has investigated and supervised hundreds of cases, identifying and prosecuting offenders. He has introduced digital tactics into overt and covert investigations, developing digital capabilities. Adam also held responsibility for training, utilising his communication skills to simplify the complex. Adam has worked with multi-national businesses, developing their people and processes to improve their cyber security maturity.
    Episode Links
    https://heimdalsecurity.com/
    https://www.linkedin.com/in/adampilton/
    --------  
    38:06
  • This Week in AI Security - 20th November 2025
    In this week's episode, Jeremy covers two major and critical developments that underscore the need to harden the foundational components of AI systems and recognize the reality of AI-orchestrated attacks.
    First, we analyze ShadowMQ, a vulnerability discovered by Oligo that affects multiple popular AI tools, including those from Nvidia and Meta Llama. The flaw stems from the mass reuse of a core, insecure component (an unsafe Python pickle deserialization technique) in the underlying plumbing of various LLMs. This vulnerability allows attackers to inject malicious commands, potentially leading to Remote Code Execution (RCE) and privilege escalation at the API layer.
    Second, we dive deep into the first publicly confirmed AI-orchestrated cyber espionage campaign, detailed in a threat intelligence report from Anthropic. The state-sponsored campaign used a frontier AI model to accelerate nearly every phase of the attack, including:
    • Weaponized System Prompts: Attackers defined a persona ("senior cyber operations specialist") to guide the LLM's malicious behavior.
    • AI-Driven Evasion: The AI was used to refine malware and bypass EDR solutions.
    • AI-Powered Reconnaissance: The model performed vulnerability research on obscure protocols and orchestrated lateral movement within networks.
    Jeremy emphasizes that this report is a wake-up call, validating the core risks around AI adoption and proving that malicious AI usage is now a real-world reality.
    Episode Links:
    https://www.oligo.security/blog/shadowmq-how-code-reuse-spread-critical-vulnerabilities-across-the-ai-ecosystem
    https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
    ------
    Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
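    The unsafe-deserialization pattern behind the ShadowMQ finding is a well-known Python pitfall, and a minimal sketch illustrates it. The `Payload` class and the harmless `eval` expression below are hypothetical stand-ins, not the actual exploit:

```python
import pickle

# Minimal sketch of why unpickling untrusted bytes is dangerous:
# pickle.loads() honors __reduce__, which lets a crafted object
# name an arbitrary callable to be invoked during deserialization.
class Payload:
    def __reduce__(self):
        # A real exploit would return a destructive callable here;
        # a harmless eval is enough to show that code runs on load.
        return (eval, ("1 + 1",))

malicious_bytes = pickle.dumps(Payload())
result = pickle.loads(malicious_bytes)  # runs eval("1 + 1") during load
print(result)  # → 2
```

    This is why inter-process messages crossing a trust boundary are usually better carried in a data-only format such as JSON, or gated through a restricted unpickler.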
    --------  
    11:10
  • Ben Wilcox of ProArch
    In this episode of Modern Cyber, Jeremy is joined by Ben Wilcox, who holds the unusual combination of CTO and CISO roles at ProArch, to discuss navigating the critical intersection of speed, risk, and security in the era of AI. Ben shares his perspective as a long-time practitioner in the Microsoft ecosystem, emphasizing that the security stack must evolve with each major technology shift, from on-prem to cloud to AI.
    The conversation focuses on how to help customers achieve "data readiness" for AI adoption, particularly stressing that organizational discipline (like good compliance) is the fastest path to realizing AI's ROI. Ben reveals that the biggest concern he hears from enterprise customers is not LLM hallucinations or bias, but the risk of a major data breach via new AI services. He explains how ProArch leverages the comprehensive Microsoft security platform to provide centralized security and identity control across data, devices, and AI agents, ensuring that user access and data governance (Purview) trickle down through the entire stack.
    Finally, Ben discusses the inherent friction of his dual CISO/CTO role, explaining his philosophy of balancing rapid feature deployment with risk management by defining a secure "MVP" baseline and incrementally layering on controls as product maturity and risk increase.
    About Ben Wilcox
    Ben Wilcox is the Chief Technology Officer and Chief Information Security Officer at ProArch, where he leads global strategy for cloud modernization, cybersecurity, and AI enablement. With over two decades of experience architecting secure digital transformations, Ben helps enterprises innovate responsibly while maintaining compliance and resilience. He's recently guided Fortune 500 clients through AI adoption and zero-trust initiatives, ensuring that security evolves in step with rapid technological change.
    Episode Links
    https://www.proarch.com/
    https://www.linkedin.com/in/ben-wilcox/
    https://ignite.microsoft.com/en-US/home
    --------  
    39:24
  • This Week in AI Security - 13th November 2025
    In this week's episode, Jeremy covers significant stories and academic findings that reveal the escalating risks and new attack methods targeting Large Language Models (LLMs) and the broader AI ecosystem. Key stories include:
    • PromptFlux Malware: Google Threat Intelligence Group (GTIG) discovered a new malware family called PromptFlux that uses the Google Gemini API to continuously rewrite and modify its own behavior to evade detection, a major evolution in malware capabilities.
    • ChatGPT Leak: User interactions and conversations with ChatGPT have been observed leaking into Google Analytics and the Google Search Console on third-party websites, potentially exposing the context of user queries.
    • Traffic Analysis Leaks: New research demonstrates that observers can deduce the topics of a conversation with an LLM chatbot with high accuracy simply by analyzing the size and frequency of encrypted network packets (token volume), even without decrypting the data.
    • Secret Sprawl: An analysis by Wiz found that several of the world's largest AI companies are leaking secrets and credentials in their public GitHub repositories, underscoring that the speed of AI development is leading to basic, repeatable security mistakes.
    • Non-Deterministic LLMs: Research from Anthropic highlights that LLMs are non-deterministic and highly unreliable in describing their own internal reasoning processes, giving inconsistent responses even to minor prompt variations.
    • The New AIVSS: The OWASP Foundation unveiled the AI Vulnerability Scoring System (AIVSS), a new framework to consistently classify and quantify the severity (on a 0-10 scale) of risks like prompt injection in LLMs, helping organizations make better risk-informed decisions.
    Episode Links:
    https://cybersecuritynews.com/promptflux-malware-using-gemini-api/
    https://thehackernews.com/2025/11/microsoft-uncovers-whisper-leak-attack.html
    https://arstechnica.com/ai/2025/11/llms-show-a-highly-unreliable-capacity-to-describe-their-own-internal-processes/
    https://futurism.com/artificial-intelligence/llm-robot-vacuum-existential-crisis
    https://www.scworld.com/resource/owasp-global-appsec-new-ai-vulnerability-scoring-system-unveiled
    https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/
    https://www.securityweek.com/many-forbes-ai-50-companies-leak-secrets-on-github/
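    The traffic-analysis story rests on a simple idea: even under encryption, the sequence of packet sizes forms a fingerprint of the content. A toy sketch of the concept follows; the topics, byte counts, and nearest-fingerprint matching are invented for illustration and are not the method from the research:

```python
# Toy illustration of a packet-size side channel: match an observed
# sequence of encrypted-packet sizes against known topic "fingerprints"
# without ever reading any plaintext. All values below are invented.
FINGERPRINTS = {
    "medical": [12, 340, 298, 310],  # hypothetical bytes per streamed chunk
    "finance": [12, 120, 95, 110],
}

def guess_topic(observed_sizes):
    # Pick the nearest fingerprint by summed absolute size difference.
    def distance(profile):
        return sum(abs(a - b) for a, b in zip(profile, observed_sizes))
    return min(FINGERPRINTS, key=lambda topic: distance(FINGERPRINTS[topic]))

print(guess_topic([12, 335, 300, 305]))  # closest to the "medical" profile
```

    Real defenses against this class of leak tend to involve padding or batching responses so that packet sizes no longer correlate with token counts.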
    --------  
    15:38


About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round-up of interesting stories from across the AI threat landscape.


v8.0.4 | © 2007-2025 radio.de GmbH
Generated: 11/28/2025 - 10:03:20 PM