AI's Fatal Flaw: The 3 Attacks That Deal with Safety and Security of the Public
My name is Hadi Ataei, and in this video I talk about three points of vulnerability in the world of AI and Machine Learning.

Is AI safety a myth? While we celebrate the breakthroughs of Large Language Models and autonomous systems, a hidden frontier of vulnerabilities is emerging that could put public security at risk. In this video, I talk about the "Fatal Flaw" of modern Artificial Intelligence: the three specific attack vectors that bypass safety guardrails and manipulate machine reasoning. From contaminating the data that models learn from to tricking deployed systems with "invisible" noise, we explore the high-stakes world of adversarial machine learning.

What we'll cover:
- Poisoning: how a compromised dataset can create "sleeper agents" within an AI.
- Evasion: the mathematical "optical illusions" that trick self-driving cars and security scanners.
- Extraction & Inversion: the silent theft of intellectual property and the risk to your private data.

Whether you are an AI developer, a cybersecurity professional, or just someone curious about the future of tech, understanding these vulnerabilities is the first step toward building a more resilient world.

Timestamps:
0:00 - Introduction: What and Why?
2:13 - Attack #1: Data Poisoning
3:09 - Attack #2: Evasion & Adversarial Noise
5:13 - Attack #3: Extraction (The Theft of the Brain)

#AI #CyberSecurity #ArtificialIntelligence #MachineLearning #TechExplainer #AISafety
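To make the evasion idea concrete, here is a minimal toy sketch (not from the video) in the spirit of the fast gradient sign method: for a simple linear classifier with hypothetical, hand-picked weights, a small signed perturbation of the input is enough to flip the model's decision, even though each feature moves only slightly.

```python
import numpy as np

# Hypothetical linear classifier: score = w . x + b; a positive score
# means class 1 (e.g. "stop sign"). Weights are made up for illustration.
w = np.array([1.0, -2.0, 0.5, 3.0])
b = -0.5

def predict(x):
    return 1 if w @ x + b > 0 else 0

x = np.array([0.9, 0.1, 0.4, 0.2])   # a clean input the model assigns to class 1

# FGSM-style evasion: for a linear model, the gradient of the score with
# respect to x is just w, so stepping each feature against sign(w) is the
# most score-decreasing move per unit of max-norm perturbation.
eps = 0.4                            # small perturbation budget ("invisible" noise)
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))    # the prediction flips under the noise
```

The same sign-of-the-gradient trick scales to deep networks, where the gradient is computed by backpropagation instead of being the weight vector itself.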