WHY 2025 - Securing AI requires life cycle thinking and reducing unintended consequences
https://media.ccc.de/v/why2025-278-se...

AI is everywhere, and where it isn't today, it most likely will be tomorrow. But jumping on the hype train and adding AI often does not sufficiently consider security, and AI can cause errors and failures its developers haven't anticipated. As was stated in the first Jurassic Park, "they were so busy thinking if they could, they didn't stop to think if they should." This talk walks you through several cases of AI failures, how they came about, and how they could have been avoided. We also go over projections of the spectacular failures we're likely to see going forward when you combine AI alignment issues, the ability of AI agents to take action, and the overconfidence of developers focused on whether they could.

Satu Korhonen
https://program.why2025.org/why2025/t...

#why2025 #Thesquarehole

Licensed to the public under https://creativecommons.org/licenses/...