In this final episode of January 2026, Jeremy breaks down a high-stakes week in AI security, featuring critical framework flaws, cloud-native exploits, and a major security warning regarding a popular autonomous AI agent.
Key Stories & Developments:
Chainlit Framework Flaws: Two critical CVEs were identified in Chainlit, a popular Python package for building enterprise chatbots. These vulnerabilities, including Arbitrary File Read and Server-Side Request Forgery (SSRF), highlight the supply chain risks inherent in the rapidly growing AI development ecosystem.
Google Gemini Workspace Exploit: Researchers demonstrated how Gemini can be manipulated via malicious calendar invites. By embedding hidden instructions (similar to Ascii or emoji smuggling), attackers can trick the AI into exfiltrating sensitive user data, such as meeting details and attachments.
VS Code "Spyware" Plugins: Over 1.5 million developers were potentially exposed to malicious VS Code extensions impersonating ChatGPT. These plugins serve as "watering hole" attacks designed to harvest sensitive environment variables, credentials, and deployment keys.
Vertex AI Privilege Escalation: A novel attack chain in Google’s Vertex AI was disclosed. Attackers used a malicious reverse shell in a reasoning engine function to escalate privileges via the Instance Metadata Service, gaining master access to chat sessions, storage buckets, and logs.
The "Cloudbot" Warning: A deep dive into Cloudbot (now rebranded as ClawdBot), a general-purpose AI agent. Researchers found hundreds of instances sitting wide open on the internet, many providing full root shell access and exposing personal conversation histories and API keys.
Episode Links
https://www.theregister.com/2026/01/20/ai_framework_flaws_enterprise_clouds/
https://www.securityweek.com/weaponized-invite-enabled-calendar-data-theft-via-google-gemini/
https://cybernews.com/security/fake-chatgpt-vscode-extensions-compromised-developers/
https://gbhackers.com/google-vertex-ai-flaw/
https://www.insurancejournal.com/magazines/mag-features/2026/01/26/855293.htm
https://arxiv.org/pdf/2601.10338
https://techcrunch.com/2026/01/27/everything-you-need-to-know-about-viral-personal-ai-assistant-clawdbot-now-moltbot/
https://securityboulevard.com/2026/01/clawdbot-is-what-happens-when-ai-gets-root-access-a-security-experts-take-on-silicon-valleys-hottest-ai-agent/
https://jpcaparas.medium.com/hundreds-of-clawdbot-instances-were-exposed-on-the-internet-heres-how-to-not-be-one-of-them-63fa813e6625
https://www.bitdefender.com/en-us/blog/hotforsecurity/moltbot-security-alert-exposed-clawdbot-control-panels-risk-credential-leaks-and-account-takeovers
Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo