In this episode for May 14, 2026, Jeremy breaks down a watershed moment in cybersecurity: the first confirmed case of hackers using AI to discover and weaponize a zero-day vulnerability in the wild. We also explore a major self-reported PII leak in the banking sector and the expanding attack surface of AI development environments.
Key Episode Highlights:
The First AI-Generated Zero-Day: Google Threat Intelligence confirms hackers used AI to discover and weaponize a 2FA bypass in an open-source admin tool, marking a transition from theoretical risk to documented reality.
Banking Sector PII Leak: Community Bank (operating in PA, OH, and WV) filed an 8-K reporting that sensitive customer data, including SSNs and dates of birth, leaked into an AI application during training.
The "Beagle" Backdoor: Sophos uncovered a fake Claude-Pro website pushing trojanized installers that deploy a memory-resident backdoor targeting AI coding environments.
Framework Exploitation: Research reveals how prompt injection in popular frameworks like Semantic Kernel, LangChain, and CrewAI can escalate to full remote code execution (RCE).
Phonetic Obfuscation: New proof-of-concept research shows that LLMs can navigate phonetic misspellings to interpret malicious intent, effectively bypassing standard text filters.
Pixel-Perfect Phishing: Vercel’s v0.dev tool is being used by attackers to generate nearly perfect brand impersonations for Nike, Adidas, and Microsoft, making phishing detection significantly harder.
Secure AI Across Your Entire Organization
Unregulated AI usage and data leaks are the biggest threats to your organization's reputation. Get full visibility into your AI environment and block sensitive data exfiltration in 15 minutes. Book your FireTail demo: https://www.firetail.ai/schedule-your-demo
Episode Links
https://cloud.google.com/blog/products/identity-security/beyond-source-code-the-files-ai-coding-agents-trust-and-attackers-exploit
https://www.microsoft.com/en-us/security/blog/2026/05/07/prompts-become-shells-rce-vulnerabilities-ai-agent-frameworks/
https://www.bleepingcomputer.com/news/security/fake-claude-ai-website-delivers-new-beagle-windows-malware/
https://www.infosecurity-magazine.com/news/researchers-10-wild-indirect/
https://www.darkreading.com/cloud-security/hackers-ai-exploit-dev-attack-automation
https://www.darkreading.com/ics-ot-security/worlds-first-ai-driven-cyberattack-couldnt-breach-ot-systems
https://hackread.com/hackers-exploit-vercel-genai-phishing-sites/
https://bishopfox.com/blog/cve-2026-42208-pre-authentication-sql-injection-in-litellm-proxy
https://securityaffairs.com/191888/data-breach/braintrust-security-incident-raises-concerns-over-ai-supply-chain-risks.html
https://shape-of-code.com/2025/06/29/an-attempt-to-shroud-text-from-llms/
https://databreaches.net/2026/05/12/us-bank-reports-itself-for-revealing-customer-data-to-unauthorized-ai-application/