У нас вы можете посмотреть бесплатно Bug Bounty Hunting for AI & LLM Exploits (NOT just Prompt Injection!!) или скачать в максимальном доступном качестве, видео которое было загружено на ютуб. Для загрузки выберите вариант из формы ниже:
Если кнопки скачивания не
загрузились
НАЖМИТЕ ЗДЕСЬ или обновите страницу
Если возникают проблемы со скачиванием видео, пожалуйста напишите в поддержку по адресу внизу
страницы.
Спасибо за использование сервиса ClipSaver.ru
AI bug bounty is finally real, but almost everything you see online is still focused on one thing: prompt injection. And yes, it matters, but it is only a small piece of what can actually go wrong in real production AI systems. So in this livestream, we are going deeper. We will hunt for real-world AI and LLM vulnerabilities by breaking the target down like an attacker would: understanding how the AI feature is wired into the product, identifying the mechanisms and notable objects we can target, and building a testing plan that focuses on impact. We will use two frameworks to guide the entire stream: ✅ OWASP Top 10 for LLM Applications ✅ MITRE ATLAS Then we will walk through practical tests for issues that show up constantly in modern AI apps, like: RAG exploitation (retrieval manipulation, data exposure, poisoning, unsafe ingestion) Tool and agent abuse (over-permissioned actions, trust boundary failures) Supply chain risks (plugins, libraries, integrations) Excessive Agency and Lack of Output Encoding Cloud AI pivot points (notebooks, isolation gaps, secrets exposure) MCP servers and AI-to-tool bridges (discovery and security testing) If you want a repeatable process for finding AI bugs that are actually report-worthy, this stream will give you a blueprint you can reuse on almost any AI-enabled target. Let’s go hunting.