Campus Security - Behavioral AI and Campus Surveillance
Ari Sen was a student at UNC-Chapel Hill when he filed a simple records request. He wanted to know what school officials were saying behind the scenes about a wave of campus protests over a Confederate statue called Silent Sam. His request was denied. So he kept digging—and what he found turned into something much bigger.

Buried in the university’s public records log was a contract for a product called Social Sentinel—a piece of AI software marketed as a “behavioral risk” tool for identifying threats like suicide or school shootings by scanning public social media. But as Ari kept turning pages, another purpose emerged: monitoring protests, tracking criticism, even flagging individual students by name.

In this episode, we sit down with Ari—now a data reporter with The Dallas Morning News and the Pulitzer Center’s AI Accountability Network—to walk through how this software actually works, how it’s marketed (publicly vs. privately), and how a tool built for safety crept into surveillance. From keyword watchlists to email scanning to social media user tracking, we trace the spread of Social Sentinel from universities to K–12 schools, and now into workplaces and public venues. What starts as a story about one tool becomes a bigger conversation about trust, privacy, and what happens when institutions adopt AI with little oversight.

One student found a file. Inside that file was a system built to watch him back.