Should AI agents have credentials at all, or earn access in real time like a contractor? AI security researcher Vineeth Sai Narajala (Senior Technical Leader at Cisco and contributor to OWASP initiatives) answers five bold questions about agentic AI security with no hedging.

🚨 Questions Covered:
• Should AI agents have base credentials or just-in-time access?
• Do LLMs need identity systems like passports?
• Should prompt injection be treated as a crime?
• Is human-in-the-loop just "babysitting"?
• Should OWASP deploy its own enforcement agent?

🔐 Key Insights:
→ AI agents need a hybrid credential model: base permissions plus privileged-access requests
→ Identity systems may need to evolve beyond traditional authentication models
→ Prompt injection is a structural vulnerability, not a criminal exploit
→ Sandboxing can reduce blast radius without overwhelming developers
→ Human-in-the-loop is contextual; not every agent needs it

As AI agents move from chatbots to autonomous actors, permission design, identity infrastructure, and sandboxed execution become critical architectural decisions. If you're building AI agents, designing LLM systems, or working on secure-by-design AI infrastructure, this discussion delivers practical, nuanced insight.

Watch the full episode for more deep dives into agentic AI, AI security standards, and autonomous system governance.
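The hybrid credential model from the insights above can be sketched in a few lines. This is a minimal illustration, not from any real framework: the names `AgentCredentials` and `request_privileged_access` are hypothetical, and the "approver" stands in for whatever human or policy engine grants just-in-time access.

```python
import time
from dataclasses import dataclass, field

# Hypothetical sketch: the agent holds a small set of always-on base
# permissions and must request short-lived, scoped grants for anything
# privileged, instead of carrying those credentials permanently.

@dataclass
class AgentCredentials:
    agent_id: str
    base_permissions: set                        # low-risk, always-on scopes
    grants: dict = field(default_factory=dict)   # scope -> expiry timestamp

    def can(self, scope: str) -> bool:
        """Allowed if the scope is a base permission or an unexpired grant."""
        if scope in self.base_permissions:
            return True
        expiry = self.grants.get(scope)
        return expiry is not None and expiry > time.time()

def request_privileged_access(creds, scope, approver, ttl_seconds=300):
    """Just-in-time grant: a human or policy engine approves a
    short-lived scope; it expires after ttl_seconds."""
    if approver(creds.agent_id, scope):
        creds.grants[scope] = time.time() + ttl_seconds
        return True
    return False

# Usage: the agent can read tickets by default but must ask before
# sending email, which limits the blast radius of a compromised agent.
creds = AgentCredentials("agent-7", base_permissions={"tickets:read"})
print(creds.can("tickets:read"))   # base permission
print(creds.can("email:send"))     # no grant yet
request_privileged_access(creds, "email:send", approver=lambda a, s: True)
print(creds.can("email:send"))     # short-lived grant now active
```

The expiry timestamp is what distinguishes this from simply adding a permission: once the TTL lapses, `can()` denies the scope again without any revocation step.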