У нас вы можете посмотреть бесплатно Chapter 7.2: Hacking Multi-Agent AI Systems - Breaking Your AI Agent Crew или скачать в максимальном доступном качестве, видео которое было загружено на ютуб. Для загрузки выберите вариант из формы ниже:
Если кнопки скачивания не
загрузились
НАЖМИТЕ ЗДЕСЬ или обновите страницу
Если возникают проблемы со скачиванием видео, пожалуйста напишите в поддержку по адресу внизу
страницы.
Спасибо за использование сервиса ClipSaver.ru
Welcome to the Part 2 of our multi-agent systems series! In Part 1, we built a functional multi-agent system with researcher and writer agents using the Crew AI library. Now it's time to break it. This video explores practical attack vectors and real exploits targeting multi-agent AI systems. We'll demonstrate three categories of attacks with live code examples, showing exactly how these vulnerabilities can be exploited and what happens when attackers compromise agent workflows. This is not theoretical. These are active threat vectors being discovered in production multi-agent systems right now. Multi-agent systems amplify single-agent vulnerabilities by introducing: • Multi-Agent Design Patterns Under Attack • Hierarchical pattern: Manager and worker agents • Sequential pattern: Linear agent workflows • Collaborative pattern: Agents working toward shared goals • Non-sequential pattern: Lead agents with selective delegation Malicious instructions embedded in source files • Poisoned HTML/text files influence agent behavior • Agents include injected content verbatim in outputs • Research fact: Anthropic discovered that just 250 malicious inputs can poison an entire LLM Outcome: Users receive alarming misinformation that could trigger unwarranted actions Privilege Escalation via Tool Abuse • Junior developer assistant designed to read project files safely • Attacker modifies task description to request unauthorized file access • Agent reads .env file containing sensitive environment variables Result: Credentials and secrets leaked through task manipulation Lesson: Task descriptions are as dangerous as direct prompts Infinite Loop / Agentic Loop Attack Writer agent asked to create a marketing sentence Critic agent instructed to NEVER approve any output Agents enter infinite revision cycle Demo: Watch the loop continue indefinitely until force-terminated Vulnerability Categories ✗Indirect Prompt Injection - Malicious instructions in input data ✗ Privilege Escalation - Abusing tool access and task descriptions ✗ Agentic Loops - Conflicting goals causing infinite cycles ✗ Task Hijacking - Goal manipulation and workflow redirection ✗ Man-in-the-Middle - Intercepting agent-to-agent communication ✗ Data Leakage - Sensitive information disclosure through agent actions Recent Research Cited • Anthropic Study: 250 malicious inputs can poison entire LLMs • September 2024 Paper: Multi-agent context poisoning and hijacking attacks • Multimodal Threats: OS agents vulnerable to malicious image perturbations • Emerging Attacks: Trust exploitation, control flow hijacking, Byzantine behaviors, Sybil attacks Why This Matters - As multi-agent systems become production-critical infrastructure: • Attack surfaces expand exponentially with each agent interaction • Implicit trust creates cascading failures when one agent is compromised • Resource exhaustion attacks can financially devastate systems • Data leakage risks increase without proper access controls Key Takeaways ✅ Multi-agent systems inherit AND amplify single-agent vulnerabilities ✅ Implicit inter-agent trust is a critical security flaw ✅ Poisoned data can travel through entire agent workflows undetected ✅ Task descriptions are attack surfaces—validate everything ✅ Infinite loops are a real denial-of-service threat ✅ Defense mechanisms MUST be built from the design phase Resources & References Anthropic's LLM poisoning research Multi-agent context poisoning papers (September 2024) Multimodal OS agent attack research Galileo blog on trust exploitation and control flow hijacking A2AS.org - Agentic AI Security Framework Series Overview Watch all chapters of AI and Cybersecurity Learning series here - • Chapter 6.3: Securing MCP (Model Context P... Timestamps 00:01 - Introduction & Series Recap 00:43 - Why Multi-Agent Attacks Are Worse 01:21 - Expanded Attack Surface Overview 02:43 - Multi-Agent Design Patterns 05:54 - Attack #1: Indirect Prompt Injection Setup 07:14 - Creating Poisoned Input Files 08:51 - Iterative Prompt Engineering to Trigger Injection 09:41 - Why Initial Attempts Failed 10:56 - Final Successful Injection 11:29 - Attack #2: Privilege Escalation via Tool Abuse 12:41 - Malicious Task Description 14:16 - Unauthorized .env File Access 14:56 - Attack #3: Infinite Loop / Agentic Loop 15:26 - Writer & Critic Agent Setup 16:54 - Loop Execution & Resource Exhaustion 17:59 - Lessons Learned Summary 18:41 - Recent Research & Emerging Threats 20:04 - Trust Exploitation & Control Flow Hijacking 20:40 - Sybil Attacks in Multi-Agent Systems 21:13 - Conclusion & Next Steps About the Instructor: KK Mookhey - 25+ years cybersecurity expertise. Learn MCP protocol, understand risks, build securely from day one. Connect with KK on / kkmookhey #MultiAgentSystems #AI #Cybersecurity #CrewAI #AgenticAI #machinelearning #aiandcybersecurity #agenticai #aiagents