Organizations are rolling out “AI agents” faster than they understand what those systems can actually do, or what level of control they demand. In this talk, Seth Misenar introduces the Misenar 4A Model, a capability-based framework for understanding agentic AI security. Instead of relying on vague labels or vendor marketing, the model breaks AI systems into four clear autonomy levels: Assistant, Adjuvant, Augmentor, and Agent, each with very different risk profiles.

Using real incidents and recent failures, including:
- AI coding agents deleting production databases
- Automated attack chains driven by LLM tooling
- Browser-based agents acting on poisoned inputs

this session explains why these outcomes are not edge cases, but predictable results of mismatched autonomy and governance.

You’ll learn:
- Why most “agents” aren’t actually agents
- Where the critical “DANGER CLOSE” boundary appears
- How autonomy quietly escalates through tools, plugins, and approvals
- What controls need to exist before AI is allowed to act
- How to evaluate agent claims without reading marketing tea leaves

This talk is for security leaders, engineers, and anyone responsible for deploying or defending AI systems who needs a clear way to answer one question: how autonomous should your AI really be?

Learn more about SEC411: AI Security Principles and Practices: GenAI and LLM Defense: https://go.sans.org/gJUsFW
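The abstract names the four levels but does not spell out their mechanics. As a rough illustration of the capability-gating idea, here is a minimal Python sketch: the per-level readings in the comments, the `AutonomyLevel` names, and the `APPROVAL_THRESHOLD` policy are illustrative assumptions, not Misenar's definitions.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # Hypothetical readings of the 4A level names; the talk defines the real ones.
    ASSISTANT = 1  # answers prompts, takes no actions on its own
    ADJUVANT = 2   # proposes actions for a human to carry out
    AUGMENTOR = 3  # executes actions, but only with step-by-step approval
    AGENT = 4      # executes actions autonomously

# Assumed policy: anything at or above this tier must not act
# without an explicit human sign-off.
APPROVAL_THRESHOLD = AutonomyLevel.AUGMENTOR

def run_action(level: AutonomyLevel, action: str, approved: bool = False) -> str:
    """Gate an action on the system's autonomy tier and human approval."""
    if level >= APPROVAL_THRESHOLD and not approved:
        return f"BLOCKED: {action!r} at {level.name} requires human approval"
    return f"EXECUTED: {action!r} at {level.name}"

print(run_action(AutonomyLevel.ASSISTANT, "summarize logs"))
print(run_action(AutonomyLevel.AGENT, "drop staging database"))        # blocked
print(run_action(AutonomyLevel.AGENT, "drop staging database", True))  # signed off
```

The sketch mirrors the talk's escalation warning: the moment a tool or plugin quietly bumps a system past the threshold tier, an approval gate like this is all that stands between suggestion and action.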