Building security around ML: Dr. Andrew Davis
The field of adversarial ML has been active since at least 2013, and despite over a decade of attempts to make models more robust to imperceptible changes in their inputs, attack methods still outpace our ability to defend neural networks and other machine learning models. In this talk, we'll get into why adversarial examples are becoming increasingly relevant with the advent of agentic multimodal LLMs, and what we can do to defend these models.

Recorded live in San Francisco at the AI Engineer World's Fair. See the full schedule of talks at https://www.ai.engineer/worldsfair/20... and join us at the AI Engineer World's Fair in 2025! Get your tickets today at https://ai.engineer/2025

About Dr. Andrew

Dr. Andrew Davis is Chief Data Scientist at HiddenLayer, where he leads research on defending ML systems and detecting attacks against them. Coming from a cybersecurity background, Andrew has been interested in the problem of adversarial examples since seeing the "Intriguing Properties of Neural Networks" poster at the ICLR 2014 workshop.
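To make the "imperceptible changes in the input" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one classic way such adversarial perturbations are crafted. This is an illustrative toy example on a logistic-regression model, not a method from the talk; all variable names and the epsilon value are assumptions for the demo.

```python
import numpy as np

def fgsm(x, w, b, y_true, eps=0.25):
    """Perturb input x in the direction that increases the model's loss.

    Toy logistic-regression "model": p = sigmoid(w @ x + b).
    FGSM takes one step of size eps along the sign of the loss gradient
    with respect to the input.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability of class 1
    grad_x = (p - y_true) * w         # d(cross-entropy loss)/dx for this model
    return x + eps * np.sign(grad_x)  # small, bounded per-feature perturbation

rng = np.random.default_rng(0)
w = rng.normal(size=16)               # toy model weights
b = 0.0
x = rng.normal(size=16)               # a "clean" input
y = 1.0                               # its true label

x_adv = fgsm(x, w, b, y)
orig = 1.0 / (1.0 + np.exp(-(w @ x + b)))
adv = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
print(f"score on clean input: {orig:.3f}, after FGSM: {adv:.3f}")
```

Each feature moves by at most eps, yet the model's confidence in the true class drops; on high-dimensional inputs like images, the same trick produces changes a human cannot see.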