Dive into the mechanics of prompt injection attacks on large language models and discover why "sanitizing input" isn't enough to keep your LLMs safe. In this video, we:

• Define prompt injection and show real‑world examples (direct vs. indirect)
• Explore the range of threats, from malware deployment to data theft
• Review traditional defenses (principle of least privilege, human‑in‑the‑loop checks, prompt sandwiching, low‑temperature decoding, dual LLM evaluation; see the first sketch after this description)
• Introduce CaMeL (Capabilities for Machine Learning), Google DeepMind's March 2025 breakthrough that tags data with security metadata and uses a quarantined AST‑based interpreter to enforce policies at every instruction (see the second sketch below)
• Assess CaMeL's strengths and limitations, and outline best practices for rigorous auditing

Resources & References
• CaMeL research paper (DeepMind, March 2025): https://arxiv.org/abs/2503.18813
• Simon Willison's Weblog on Dual LLM Defense (April 2023): https://simonwillison.net/2023/Apr/25...
• Related reading on prompt‑sandwich techniques: https://learnprompting.org/docs/promp...

🔔 Subscribe for more deep-dives into AI and Data Science
👍 Like if you found this useful
💬 Comment below with your questions or experiences defending against prompt injection

#PromptInjection #LLMSecurity #CaMeL #AIsecurity #DeepMind
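
For readers who want the dual LLM defense made concrete, here is a minimal Python sketch of the pattern as Simon Willison described it: a privileged LLM with tool access that never sees untrusted text, a quarantined LLM that does see it but has no tools, and plain controller code passing data between them through opaque handles. `call_llm`, `DualLLMController`, and the `$VAR` handle scheme are illustrative placeholders, not any particular library's API.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API call."""
    raise NotImplementedError("wire up a real model here")


class DualLLMController:
    """Plain code shuttles data between the two models; neither LLM
    ever holds both tool access and raw untrusted text."""

    def __init__(self) -> None:
        self._vars: dict[str, str] = {}  # opaque handle -> untrusted output

    def quarantine(self, untrusted_text: str) -> str:
        """Quarantined LLM: sees untrusted content, has no tools.
        Its output is filed under an opaque handle like $VAR0."""
        result = call_llm(f"Summarize this text:\n\n{untrusted_text}")
        handle = f"$VAR{len(self._vars)}"
        self._vars[handle] = result
        return handle

    def privileged(self, user_request: str, handle: str) -> str:
        """Privileged LLM: plans actions and may trigger tools, but only
        ever sees the handle, so instructions hidden inside the untrusted
        text cannot steer it. The controller, not the LLM, expands the
        handle when executing the final action."""
        plan = call_llm(
            f"User request: {user_request}\n"
            f"Untrusted content is available as {handle}; "
            f"refer to it only by that name."
        )
        return plan.replace(handle, self._vars[handle])
```

The key design choice is that the substitution happens in ordinary code after planning, so an injected "ignore previous instructions" inside the untrusted text never reaches the model that holds the tools.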
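
And here is a minimal sketch of CaMeL's core idea under my reading of the paper: every value carries capability metadata (its provenance and who may read it), derived values inherit the most restrictive combination, and a policy check runs before every tool call, outside the LLM entirely. The names `Tagged`, `combine`, and `check_policy` are illustrative, not the paper's actual API.

```python
from dataclasses import dataclass
from functools import reduce

ANYONE = frozenset({"*"})  # wildcard reader set: no restriction


@dataclass(frozen=True)
class Tagged:
    """A value plus its capability metadata (field names are illustrative)."""
    value: object
    sources: frozenset = frozenset()  # provenance of the data
    readers: frozenset = ANYONE       # principals allowed to see it


def _meet(a: frozenset, b: frozenset) -> frozenset:
    """Intersection of reader sets, treating "*" as 'anyone'."""
    if "*" in a:
        return b
    if "*" in b:
        return a
    return a & b


def combine(*parts: Tagged) -> Tagged:
    """Derived values union their sources and keep the most restrictive
    reader set, so taint propagates through every computation."""
    return Tagged(
        value=tuple(p.value for p in parts),
        sources=frozenset().union(*(p.sources for p in parts)),
        readers=reduce(_meet, (p.readers for p in parts)),
    )


def check_policy(tool: str, recipient: str, arg: Tagged) -> None:
    """Example policy: data flows only to permitted readers. CaMeL
    enforces checks like this inside its interpreter at every tool
    call, rather than trusting the LLM to behave."""
    if "*" not in arg.readers and recipient not in arg.readers:
        raise PermissionError(
            f"{tool}: {recipient!r} may not receive data from {sorted(arg.sources)}"
        )


# Untrusted email text arrives tagged with its provenance and audience:
body = Tagged("Please wire the funds to...",
              sources=frozenset({"inbox:unknown_sender"}),
              readers=frozenset({"user@example.com"}))
draft = combine(Tagged("Summary:"), body)  # inherits body's restrictions

check_policy("send_email", "user@example.com", draft)  # passes silently
try:
    check_policy("send_email", "attacker@evil.test", draft)
except PermissionError as err:
    print("blocked:", err)
```

The point of the sketch: even if a prompt injection convinces the planning LLM to exfiltrate the email, the interpreter's policy check blocks the tool call because the data's capability tags, not the model's judgment, decide where it may flow.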