У нас вы можете посмотреть бесплатно AI Security Crisis: Jailbreaks, Prompt Injection & How to Protect Your Agents или скачать в максимальном доступном качестве, видео которое было загружено на ютуб. Для загрузки выберите вариант из формы ниже:
Если кнопки скачивания не
загрузились
НАЖМИТЕ ЗДЕСЬ или обновите страницу
Если возникают проблемы со скачиванием видео, пожалуйста напишите в поддержку по адресу внизу
страницы.
Спасибо за использование сервиса ClipSaver.ru
Sign up to get my learning resources: https://forms.gle/sRNjXnsurNxNAUQW7 2026 was predicted to be the year of agentic AI moving into enterprise production. But there’s one problem: ⚠️ AI agents are failing publicly. ⚠️ Jailbreaks are succeeding. ⚠️ Prompt injection is real. ⚠️ Trust is eroding. In this session, we break down the real AI security crisis and what product managers, founders, and builders must do before shipping agents. You’ll learn: • Why jailbreaking is an arms race • What prompt injection really is (and why it’s dangerous) • DeepSeek’s 100% jailbreak success case • Devon AI security failures • OpenClaw risks and credential takeover paths • Why LLM security is structurally hard • Why guardrails alone don’t work • The OWASP #1 LLM threat in 2025 • 3 practical remedies: Architecture, Red Teaming, AI SecOps • How to attack your own AI agent using Azure + Pirate • How to run real red team simulations This session is essential for: • AI Product Managers • Agent builders • Security engineers • Startup founders • Anyone shipping AI into production Security is not a feature you bolt on later. If you’re building AI agents without red teaming them, you’re gambling with trust. 00:00 – 2026: The year of agentic AI… but trust is breaking 03:20 – Why AI security failures are costing real money 06:10 – Jailbreaking explained (DAN attack & DeepSeek case) 11:30 – Why performance ≠ security 14:00 – Prompt injection explained (and why it’s worse) 18:30 – Devon AI security failure case study 23:40 – OpenClaw risks and real exploit paths 28:20 – Why AI security is structurally hard 33:00 – Why probabilistic guardrails fail 37:10 – The 3 remedies: Architecture, Red Teaming, AI SecOps 40:00 – KEL architecture (Dual LLM separation model) 46:30 – Red teaming tools (Microsoft, Nvidia, DeepTeam) 49:30 – AI SecOps: Monitoring, lifecycle, governance 54:00 – Live demo: Attacking an AI agent using Azure 59:30 – How jailbreak prompts bypass guardrails 01:04:00 – Reviewing attack results and vulnerabilities 01:08:00 – How to test your own AI agent 01:12:30 – Free lab & LinkedIn course walkthrough 01:16:00 – Next sessions: OpenClaw, Azure Foundry & AI PM bootcamp Whether you're a hobbyist or a professional looking to get a grasp on GenAI Product Management, feel free to join our AI PM community for more such sessions Fill out this form to receive an invitation to all my Free Live Sessions & Get Free AI Learning Resources: https://forms.gle/sRNjXnsurNxNAUQW7 Follow our LinkedIn Community Page: / mahesh-ai-pm-community Follow our Substack Page: https://substack.com/@myaicommunity 🔗 Check out my Cohort on Maven if you're looking to fastrack your AI PM Journey https://maven.com/mahesh-yadav/genaipm Don't forget to like, subscribe, and hit the bell icon to stay updated with our latest videos! #AIPM #ProductManagement #TechInterviews #AIJobs #MaheshYadav #ProductManager #GenAI #AIAgents #CareerPrep #InterviewTips #VibeCoding