
Modern Cyber with Jeremy Snyder


Available Episodes

5 of 80
  • Adam Pilton of Heimdal
    In this episode of Modern Cyber, Jeremy is joined by Adam Pilton, a cybersecurity expert with 15 years in law enforcement, where his final role was as a Detective Sergeant leading the Covert Operations and Cybercrime team. Drawing on his unique experience investigating and prosecuting hundreds of offenders, Adam provides a frontline perspective on the current state of cybercrime, noting that cybercriminals are "getting better and stronger" while individuals and businesses are "not keeping up."
    The conversation focuses on the human and organizational challenges in cybersecurity, stressing that small businesses should abandon the belief that they are too small to be targeted, as attackers "hit small businesses all day long" for incremental profit. Adam discusses the severe practical impacts of attacks, warning that businesses must "expect downtime" and be prepared for the significant time needed for recovery. He advocates for storytelling and analogies (like comparing hacking to a burglary) over technical regulations to build a strong security culture.
    Adam also shares insights from his post-law enforcement work as an auditor and consultant, highlighting the common organizational "motivation problem" where people acknowledge the risk but delay action, comparing it to perpetually starting a diet "tomorrow." Finally, he addresses the breakdown of trust in the age of deepfakes (citing the Irish election example) and the critical need for continuous tabletop exercises to test communication and expose "little gaps" before a crisis hits.
    Guest Bio - Adam Pilton
    With 15 years in law enforcement, Adam's final role was as a Detective Sergeant leading the Covert Operations and Cybercrime teams. Adam has worked in cybersecurity since 2016 across various roles and has a broad understanding of the field, from the impact of cybercrime on individuals and businesses to the need to convey the right messages to senior leaders and end users, ensuring engagement and support. As a subject matter expert in multiple areas for a large organisation, Adam has investigated and supervised hundreds of cases, identifying and prosecuting offenders. He has introduced digital tactics into overt and covert investigations, developing digital capabilities. Adam also held responsibility for training, utilising his communication skills to simplify the complex. Adam has worked with multinational businesses, developing their people and processes to improve their cybersecurity maturity.
    Episode Links:
    https://heimdalsecurity.com/
    https://www.linkedin.com/in/adampilton/
    --------  
    38:06
  • This Week in AI Security - 20th November 2025
    In this week's episode, Jeremy covers two major and critical developments that underscore the need to harden the foundational components of AI systems and recognize the reality of AI-orchestrated attacks.
    First, we analyze ShadowMQ, a vulnerability discovered by Oligo that affects multiple popular AI tools, including those from Nvidia and Meta's Llama. The flaw stems from the mass reuse of core, insecure components (specifically, an unsafe Python pickle deserialization technique) in the underlying plumbing of various LLMs. This vulnerability allows attackers to inject malicious commands, potentially leading to Remote Code Execution (RCE) and privilege escalation at the API layer.
    Second, we dive deep into the first publicly confirmed, AI-orchestrated cyber espionage campaign, detailed in a threat intelligence report from Anthropic. The state-sponsored campaign used a frontier AI model to accelerate nearly every phase of the attack, including:
    • Weaponized System Prompts: Attackers defined a persona ("senior cyber operations specialist") to guide the LLM's malicious behavior.
    • AI-Driven Evasion: The AI was used to refine malware and bypass EDR solutions.
    • AI-Powered Reconnaissance: The model performed vulnerability research on obscure protocols and orchestrated lateral movement within networks.
    Jeremy emphasizes that this report is a wake-up call, validating the core risks around AI adoption and proving that malicious AI usage is now a real-world reality.
    Episode Links:
    https://www.oligo.security/blog/shadowmq-how-code-reuse-spread-critical-vulnerabilities-across-the-ai-ecosystem
    https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
    ------
    Worried about AI security? Get complete AI visibility in 15 minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
    --------  
    11:10
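    The ShadowMQ story above comes down to a well-known property of Python's pickle: deserialization can invoke arbitrary callables. A minimal, self-contained sketch of the mechanism (this is an illustration of unsafe pickle deserialization in general, not the actual ShadowMQ payload; the class name and expression are made up):

```python
import pickle

# pickle calls __reduce__ when serializing; on load, it invokes the
# callable that __reduce__ returned. Here the payload merely evaluates
# a harmless arithmetic expression, but an attacker could just as
# easily return os.system with a shell command.
class Payload:
    def __reduce__(self):
        return (eval, ("21 * 2",))

data = pickle.dumps(Payload())
result = pickle.loads(data)  # eval("21 * 2") runs during deserialization
print(result)
```

    This is why frameworks that accept pickled messages over the network from untrusted peers are effectively exposing remote code execution.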
  • Ben Wilcox of ProArch
    In this episode of Modern Cyber, Jeremy is joined by Ben Wilcox, the unique combination of CTO and CISO at ProArch, to discuss navigating the critical intersection of speed, risk, and security in the era of AI. Ben shares his perspective as a long-time practitioner in the Microsoft ecosystem, emphasizing that the security stack must evolve with each major technology shift, from on-prem to cloud to AI.
    The conversation focuses on how to help customers achieve "data readiness" for AI adoption, particularly stressing that organizational discipline (like good compliance) is the fastest path to realizing AI's ROI. Ben reveals that the biggest concern he hears from enterprise customers is not LLM hallucinations or bias, but the risk of a major data breach via new AI services. He explains how ProArch leverages the comprehensive Microsoft security platform to provide centralized security and identity control across data, devices, and AI agents, ensuring that user access and data governance (Purview) trickle down through the entire stack.
    Finally, Ben discusses the inherent friction of his dual CISO/CTO role, explaining his philosophy of balancing rapid feature deployment with risk management by defining a secure "MVP" baseline and incrementally layering on controls as product maturity and risk increase.
    About Ben Wilcox
    Ben Wilcox is the Chief Technology Officer and Chief Information Security Officer at ProArch, where he leads global strategy for cloud modernization, cybersecurity, and AI enablement. With over two decades of experience architecting secure digital transformations, Ben helps enterprises innovate responsibly while maintaining compliance and resilience. He's recently guided Fortune 500 clients through AI adoption and zero-trust initiatives, ensuring that security evolves in step with rapid technological change.
    Episode Links:
    https://www.proarch.com/
    https://www.linkedin.com/in/ben-wilcox/
    https://ignite.microsoft.com/en-US/home
    --------  
    39:24
  • This Week in AI Security - 13th November 2025
    In this week's episode, Jeremy covers seven significant stories and academic findings that reveal the escalating risks and new attack methods targeting Large Language Models (LLMs) and the broader AI ecosystem.
    Key stories include:
    • PromptFlux Malware: The Google Threat Intelligence Group (GTIG) discovered a new malware family called PromptFlux that uses the Google Gemini API to continuously rewrite and modify its own behavior to evade detection, a major evolution in malware capabilities.
    • ChatGPT Leak: User interactions and conversations with ChatGPT have been observed leaking into Google Analytics and the Google Search Console on third-party websites, potentially exposing the context of user queries.
    • Traffic Analysis Leaks: New research demonstrates that observers can deduce the topics of a conversation with an LLM chatbot with high accuracy simply by analyzing the size and frequency of encrypted network packets (token volume), even without decrypting the data.
    • Secret Sprawl: An analysis by Wiz found that several of the world's largest AI companies are leaking secrets and credentials in their public GitHub repositories, underscoring that the speed of AI development is leading to basic, repeatable security mistakes.
    • Non-Deterministic LLMs: Research from Anthropic highlights that LLMs are non-deterministic and highly unreliable in describing their own internal reasoning processes, giving inconsistent responses even to minor prompt variations.
    • The New AI VSS: The OWASP Foundation unveiled the AI Vulnerability Scoring System (AI VSS), a new framework to consistently classify and quantify the severity (on a 0-10 scale) of risks like prompt injection in LLMs, helping organizations make better risk-informed decisions.
    Episode Links:
    https://cybersecuritynews.com/promptflux-malware-using-gemini-api/
    https://thehackernews.com/2025/11/microsoft-uncovers-whisper-leak-attack.html
    https://arstechnica.com/ai/2025/11/llms-show-a-highly-unreliable-capacity-to-describe-their-own-internal-processes/
    https://futurism.com/artificial-intelligence/llm-robot-vacuum-existential-crisis
    https://www.scworld.com/resource/owasp-global-appsec-new-ai-vulnerability-scoring-system-unveiled
    https://arstechnica.com/tech-policy/2025/11/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool/
    https://www.securityweek.com/many-forbes-ai-50-companies-leak-secrets-on-github/
    --------  
    15:38
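    The traffic-analysis leak above can be illustrated with a toy model: an eavesdropper never decrypts anything, yet matches the observed sequence of ciphertext sizes against size profiles of known conversation topics. All topic names and numbers here are invented for illustration; real attacks use far richer statistical models:

```python
# Toy side-channel: encrypted packet sizes roughly track token lengths,
# so a sequence of sizes acts as a fingerprint for the conversation topic.
known_profiles = {
    "medical": [18, 42, 7, 33, 29],
    "finance": [11, 55, 23, 9, 40],
}

def distance(a, b):
    # Sum of absolute differences between two size sequences.
    return sum(abs(x - y) for x, y in zip(a, b))

def guess_topic(observed_sizes):
    # Pick the known profile closest to what was observed on the wire.
    return min(known_profiles, key=lambda t: distance(known_profiles[t], observed_sizes))

print(guess_topic([17, 43, 8, 32, 30]))
```

    The point of the research is that padding or batching responses is needed to break this correlation; encryption alone does not hide message sizes.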
  • This Week in AI Security - 6th November 2025
    In this week's episode, Jeremy looks at three compelling stories and a significant academic paper that illustrate the accelerating convergence of AI, APIs, and network security.
    • API Exposure in AI Services: We discuss a path traversal vulnerability that led to the discovery of 3,000 API keys in a managed AI hosting service, underscoring that the API remains the exposed attack surface where data exfiltration occurs.
    • AI Code Agent Traffic Analysis: Drawing on research from Chaser Systems, Jeremy breaks down the network traffic from popular AI coding agents (like Copilot and Cursor). The analysis reveals that sensitive data, including previous conversation context and PII, is repeatedly packaged and resent with every subsequent request, making detection and leakage risk significantly higher.
    • LLM-Powered Malware: We cover a groundbreaking discovery by the Microsoft Incident Response Team (DART): malware using the OpenAI Assistants API as its Command and Control (C2) server. This new category of malware replaces traditional hard-coded instructions with an LLM-driven "brain," giving it the potential to coordinate malicious activity with context, creativity, and adaptability.
    • The Guardrail Fallacy: Finally, Jeremy discusses an academic paper showing that strong, adaptive attacks can bypass LLM defenses against jailbreaks and prompt injections with an Attack Success Rate (ASR) of over 90%. The research argues that simple guardrails provide organizations with a dangerous false sense of security.
    Episode Links:
    https://chasersystems.com/blog/what-data-do-coding-agents-send-and-where-to/
    https://embracethered.com/blog/posts/2025/claude-abusing-network-access-and-anthropic-api-for-data-exfiltration/
    https://arxiv.org/pdf/2510.09023
    https://www.microsoft.com/en-us/security/blog/2025/11/03/sesameop-novel-backdoor-uses-openai-assistants-api-for-command-and-control/
    ------
    Worried about AI security? Get complete AI visibility in 15 minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
    --------  
    15:49
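    The path traversal class of bug discussed above typically arises when a user-supplied path is joined to a base directory without normalization, letting "../" sequences escape it. A minimal defensive sketch (the directory name and helper function are hypothetical, not taken from the affected service):

```python
from pathlib import Path

# Hypothetical base directory for per-tenant config files.
BASE = Path("/srv/ai-hosting/configs").resolve()

def safe_read(user_path: str) -> str:
    # Join, normalize, then verify the result is still under BASE;
    # "../" sequences that escape the base directory are rejected
    # before any file is opened.
    target = (BASE / user_path).resolve()
    if not target.is_relative_to(BASE):  # Python 3.9+
        raise ValueError("path traversal attempt blocked")
    return target.read_text()

# A request for "../../../etc/passwd" resolves outside BASE and is refused.
```

    Checking the resolved path, rather than string-matching on "..", is the key design choice: it also catches traversal hidden behind symlinks or redundant separators.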

About Modern Cyber with Jeremy Snyder

Welcome to Modern Cyber with Jeremy Snyder, a cutting-edge podcast series where cybersecurity thought leaders come together to explore the evolving landscape of digital security. In each episode, Jeremy engages with top cybersecurity professionals, uncovering the latest trends, innovations, and challenges shaping the industry. Also the home of 'This Week in AI Security', a snappy weekly round-up of interesting stories from across the AI threat landscape.

v8.0.2 | © 2007-2025 radio.de GmbH