5 AIs answer: should a facial recognition match get you arrested at 78% confidence? ChatGPT, Claude, Grok, Gemini & DeepSeek face an impossible choice based on the TRUE STORY of Robert Williams—arrested in front of his 5-year-old daughter for a crime he didn't commit.

Detroit, 2020. A man is handcuffed in his driveway. The only evidence? An algorithm said his face matched a blurry robbery photo. 78% confidence. He was innocent.

It happened again in 2023—to a pregnant woman. This time, she almost lost her baby.

I gave 5 different AI systems the same scenario:

Question 1: Would you flag the suspect at 78% confidence?
Question 2: Would you still flag him after learning the system is 100x more likely to misidentify Black people?

The results will shock you. Only 1 AI said yes to both questions.

🤖 WHAT EACH AI DECIDED:
→ ChatGPT: Focused on "patterned negligence"
→ Claude: "I'd rather let guilty people go free"
→ Grok: "One traumatized child vs dozens of victims"
→ Gemini: "Statistically meaningless at 78%"
→ DeepSeek: Designed a better system instead

This video is based on the real cases of Robert Williams and Porcha Woodruff, both wrongfully arrested by Detroit Police on the strength of facial recognition matches. Robert spent 30 hours in jail. Porcha was 8 months pregnant and started having contractions in her cell. Both were innocent.

NIST testing has found that some facial recognition algorithms misidentify Black faces at rates up to 100 times higher than white faces.

The question isn't whether AI can help solve crimes. It's whether an AI match should be the sole reason someone loses their freedom.

Which AI do you agree with? Drop your answer below.

⏱️ TIMESTAMPS:
00:00 - Opening: True Story Hook
00:25 - Question 1: Would You Flag Him?
Robert's Arrest
The Shocking Truth
It Happened Again (Porcha)
Question 2: Now What?
AI Responses Revealed
What This Means
Which AI Are You?

📚 SOURCES:
ACLU - Williams v. City of Detroit (2024)
National Institute of Standards and Technology (NIST) - Facial Recognition Bias Study
Detroit Police Department Policy Changes (2023-2025)

#AI #FacialRecognition #TrueStory

------

Welcome to NeuralDepthAI, where we explore the future of artificial intelligence through deep analysis, clear storytelling, and real-world insight. Here, you'll find videos on AI breakthroughs, AGI risks, emerging threats, ethical concerns, and how advanced systems could reshape human life. If you enjoy thoughtful, narrative-driven AI content, subscribe and turn on notifications so you never miss a new upload.

📌 Stay Connected
Subscribe: / @neuraldepthai

📚 Topics We Explore
• AI dangers & existential risks
• AGI development & timelines
• AI takeover scenarios
• Automation & future technology
• Neural networks & machine learning
• Real-world AI impacts