Modern Cyber: Episode 86 - This Week in AI Security (22 Jan 26)
In this episode of This Week in AI Security, Jeremy highlights a significant uptick in AI-related vulnerabilities and a shifting regulatory landscape. The episode covers everything from the "Body Snatcher" flaw in enterprise platforms to the growing industrialization of AI-powered exploit generation.

Key Stories & Developments:

California's Cease and Desist to xAI: Following international concern over sexualized deepfakes, California has issued a first-of-its-kind cease-and-desist order to xAI. In the absence of federal legislation, this marks a major moment in state-level AI oversight.

ServiceNow "Body Snatcher" Flaw: A critical CVE rated 9.3/10 was identified in ServiceNow's AI agent service. An unauthenticated endpoint allowed remote code execution (RCE), demonstrating that unauthenticated APIs remain a massive risk for agentic systems.

Anthropic "Magic String" Crash: Researchers discovered a specific "magic string" that reliably crashes Anthropic LLM sessions. The crafted prompt acts as a denial of service against agentic workflows by killing the active interaction stream.

Claude Code Data Leak: A default logging feature in Claude Code saves full-text chat histories to a local directory. Developers who commit this directory to public repos risk exposing their entire application logic and internal prompts to attackers.

Eurostar Chatbot Exploit: A public-facing AI chatbot for Eurostar was found vulnerable to guardrail bypass and prompt injection. Ross Donald discovered that simply hardcoding a "validation" parameter in the API request let him bypass the front-end checks.

Industrialized Exploit Generation: A new study suggests that with a token budget of only about $30, an LLM can generate a working exploit for a known software vulnerability, potentially cutting time-to-exploit to under 20 minutes.

Episode Links:
https://thehackernews.com/2026/01/ser...
https://appomni.com/ao-labs/bodysnatc...
https://cy.md/opencode-rce/
https://techcrunch.com/2026/01/16/cal...
https://mastodon.social/@Viss/1159231...
https://sean.heelan.io/2026/01/18/on-...
https://bsky.app/profile/aparker.io/p...

Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
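A quick practical note on the Claude Code data-leak story above: the simplest mitigation is to make sure the local chat-history directory is never committed. This is a minimal sketch, assuming the history lives in a `.claude/` directory at the project root; the actual directory name and location are an assumption here and may differ by tool version.

```shell
# Assumed Claude Code history directory; the real path may differ by version.
LOG_DIR=".claude"

# Warn if the directory is already tracked by git (history may be in the repo).
if git ls-files --error-unmatch "$LOG_DIR" > /dev/null 2>&1; then
  echo "WARNING: $LOG_DIR is tracked; chat history may already be committed"
fi

# Add the directory to .gitignore if it is not listed yet.
if ! grep -qx "$LOG_DIR/" .gitignore 2>/dev/null; then
  echo "$LOG_DIR/" >> .gitignore
  echo "Added $LOG_DIR/ to .gitignore"
fi
```

Run this from the repository root. Note that if the warning fires, ignoring the directory going forward is not enough: previously committed history would still need to be purged from the repo's git history.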