Securing LLMs: From Pickle File Risks to Agentic Excessive Agency
Welcome to the "Engine Room" of AI Security. In this deep dive, we break down Domain 3 of the AISM certification, moving beyond compliance checklists into the technical mechanisms required to secure dynamic AI systems.

In this video, we cover:

• XAI Fundamentals: the critical differences between interpretability, explainability, and transparency. We compare "whitebox" vs. "blackbox" models and the trade-offs between LIME (local approximation) and SHAP (game-theory based).
• Securing the Supply Chain: why "pickle" files are a major security risk and why you should migrate to Safetensors. We also discuss the AI Bill of Materials (AI BOM) and model signing with tools like Cosign.
• Generative AI Threats: a look at the OWASP Top 10 for LLMs 2025, specifically Prompt Injection (LLM01) and Excessive Agency (LLM06).
• Architecture & Governance: implementing Zero Trust for AI pipelines, RAG security via metadata-based access control, and ISO 42001 alignment.

Whether you are studying for the AISM exam or architecting secure AI agents, this session covers the fidelity, usability, and efficiency metrics you need to know.
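The pickle risk mentioned above comes from how Python's `pickle` protocol works: during deserialization it will call whatever callable an object's `__reduce__` method returned, so loading an untrusted model file can execute arbitrary code. A minimal, harmless demonstration (the `MaliciousModel` class is illustrative):

```python
import pickle

# pickle records the callable returned by __reduce__ and invokes it
# on load -- the victim never has to import or run the attacker's code
# explicitly; pickle.loads does it for them.
class MaliciousModel:
    def __reduce__(self):
        # Harmless payload here (uppercasing a string), but this could
        # just as easily be os.system or a network callback.
        return (str.upper, ("pwned",))

payload = pickle.dumps(MaliciousModel())
result = pickle.loads(payload)  # executes str.upper("pwned")
print(result)  # → PWNED
```

This is why Safetensors is preferred for model weights: it is a pure data format (tensors plus a JSON header) with no code-execution path on load.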
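The metadata-based access control pattern for RAG can be sketched in a few lines: attach an authorization label to every indexed chunk and filter on the caller's entitlements *before* ranking, so unauthorized text never enters the LLM's context window. All names here (`Chunk`, `retrieve`, the clearance levels) are illustrative, not a specific framework's API, and keyword matching stands in for vector similarity:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    clearance: str  # e.g. "public", "internal", "restricted"

LEVELS = {"public": 0, "internal": 1, "restricted": 2}

def retrieve(chunks, query, user_clearance):
    """Filter by clearance first, then score relevance. Filtering after
    ranking (or worse, in the prompt) risks leaking restricted text."""
    allowed = [c for c in chunks if LEVELS[c.clearance] <= LEVELS[user_clearance]]
    # Toy relevance scoring: substring match in place of embeddings.
    return [c for c in allowed if query.lower() in c.text.lower()]

docs = [
    Chunk("Salary bands for 2025", "restricted"),
    Chunk("Office holiday schedule 2025", "public"),
]
results = retrieve(docs, "2025", "public")
print([c.text for c in results])  # only the public chunk survives
```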
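One common mitigation for Excessive Agency (LLM06) is to constrain what an agent can invoke: an explicit tool allowlist plus a human-approval gate for destructive actions. A minimal sketch, with all tool names and the `approver` callback being hypothetical:

```python
# Tools the agent may call freely vs. those needing a human in the loop.
SAFE_TOOLS = {"search_docs", "summarize"}
NEEDS_APPROVAL = {"send_email", "delete_record"}

def dispatch(tool, args, approver=None):
    """Route an agent's tool request through policy: allow, gate, or deny.
    Anything not explicitly listed is denied by default (Zero Trust)."""
    if tool in SAFE_TOOLS:
        return f"ran {tool}"
    if tool in NEEDS_APPROVAL:
        if approver is not None and approver(tool, args):
            return f"ran {tool} (approved)"
        return f"blocked {tool}: human approval required"
    return f"blocked {tool}: not on allowlist"

print(dispatch("search_docs", {}))          # runs freely
print(dispatch("delete_record", {"id": 7})) # gated, no approver → blocked
print(dispatch("rm_rf", {}))                # unknown tool → denied
```

The deny-by-default branch is the important design choice: the agent's capabilities are whatever the dispatcher grants, not whatever the model decides to ask for.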