Modern Cyber: Episode 92 - This Week in AI Security, 26 Feb 2026
In this episode of This Week in AI Security for February 26, 2026, Jeremy covers another packed week featuring AI privacy boundary failures, agent-driven outages, AI-accelerated cybercrime, Android malware innovation, platform responsibility debates, and the continued risks of vibe-coded applications.

Key Stories & Developments:

- Microsoft Copilot Confidential Email Bug: Microsoft Copilot was found summarizing confidential emails due to a flaw in the Copilot Chat "Work" tab.
- AI Agent Triggers AWS Bedrock Outage: An outage involving Amazon Bedrock exposed the risks of agentic coding systems with broad permissions.
- AI-Powered Assembly Line for Cybercrime: A Russian-speaking attacker breached FortiGate firewalls across 55 countries in just five weeks, using AI as a force multiplier.
- PromptSpy: Android Malware Using Live LLM Command & Control: PromptSpy became the first known Android malware to dynamically leverage Google Gemini at runtime. Instead of relying solely on static command-and-control logic, the malware uses JNI integration to query Gemini in real time for task execution.
- ChatGPT, Mental Health, and Law Enforcement Boundaries: Following a shooting incident in Tumbler Ridge, Canada, investigators discovered significant use of ChatGPT by the suspect prior to the event. Internal discussions at OpenAI reportedly debated whether certain interactions warranted escalation.
- LLM-Generated Passwords Lack Entropy: Security researchers highlighted that passwords generated by LLMs exhibit approximately 80% less entropy than those created by traditional password generators.
- Vibe-Coded Security Suite Exposes Master Keys: A Reddit thread revealed that a suite of "RR"-branded tools were entirely vibe-coded applications with severe security flaws. Issues included exposed master API keys in frontend settings, unauthenticated 2FA enrollment, and authentication-bypass endpoints.
- Anthropic Moves from Detection to Remediation: Anthropic introduced tooling aimed at moving beyond passive source-code analysis toward automated remediation of vulnerabilities.

Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo

Episode Links:
https://www.bleepingcomputer.com/news...
https://www.thestandard.com.hk/tech-a...
https://www.reuters.com/business/reta...
https://www.bleepingcomputer.com/news...
https://cyberandramen.net/2026/02/21/...
https://www.bleepingcomputer.com/news...
https://techcrunch.com/2026/02/21/ope...
https://www.techradar.com/pro/securit...
/ huntarr_your_passwords_and_your_entire_arr...
https://www.anthropic.com/news/claude...
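The password-entropy story above can be illustrated with some back-of-the-envelope arithmetic. The sketch below compares the theoretical entropy of a uniformly random password with that of a "memorable" word-plus-digits pattern of the kind LLMs tend to produce; the charset size, word-list size, and password shape are illustrative assumptions, not figures from the cited research.

```python
import math

def charset_entropy_bits(charset_size: int, length: int) -> float:
    """Entropy (bits) of a password drawn uniformly at random from a
    fixed character set: length * log2(charset_size)."""
    return length * math.log2(charset_size)

# A 16-character password over the 94 printable ASCII symbols, as a
# traditional password generator would produce: ~105 bits.
random_bits = charset_entropy_bits(94, 16)

# A hypothetical LLM-style password: two words drawn from a ~2,000-word
# common vocabulary plus a two-digit suffix (e.g. "BlueRiver42").
# All figures here are assumptions for illustration only.
llm_bits = 2 * math.log2(2000) + math.log2(100)

reduction = 1 - llm_bits / random_bits
print(f"random: {random_bits:.1f} bits, "
      f"LLM-style: {llm_bits:.1f} bits, "
      f"reduction: {reduction:.0%}")
```

Under these assumptions the word-based password carries roughly 70% less entropy than the uniformly random one, in the same ballpark as the ~80% figure from the story; the exact gap depends entirely on the pattern the model falls into.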