🎙️ Even Simple Algorithms Can Discriminate | Lunchtime BABLing

Broad, clear definitions are essential for protecting job applicants. In this Lunchtime BABLing segment, BABL AI CEO Dr. Shea Brown examines how even basic algorithms, such as K-Nearest Neighbors, can result in discriminatory outcomes. He critiques the narrow scope of New York City's Local Law 144, which currently focuses on certain machine learning tools while leaving simpler but equally impactful algorithms outside its regulatory reach.

👉 Lunchtime BABLing listeners can save 20% on all BABL AI courses with code BABLING20
📚 Courses: https://babl.ai/courses/
🌐 Visit BABL AI: https://babl.ai/
📩 Subscribe to The Algorithmic Bias Lab mailing list: https://www.algorithmicbiaslab.com/
🔗 Follow BABL AI: https://linktr.ee/babl.ai

⏱️ Chapters
00:00 – Intro: Overview of NYC Local Law 144 and AEDTs
01:00 – What counts as an "automated employment decision tool"?
01:44 – Why machine learning is the focus of concern
02:42 – Concerns about the proposed rule's limited scope
03:50 – Reading the official AEDT definition
04:47 – How the law defines machine learning and AI systems
06:00 – Is the scope too narrow? Shea's critique
07:06 – Problematic exclusions from the AEDT definition
08:10 – Hypothetical employer use of historical hiring data
09:25 – Defining applicant inputs like GPA and employment gaps
10:55 – Mapping employee data in parameter space
12:15 – Proxy variables and protected categories
14:00 – The risk of bias from human performance metrics
15:01 – Human bias in outcome labels
16:06 – Two critical assumptions: proxy variables and human bias
17:00 – Explaining the K-Nearest Neighbors (KNN) algorithm
18:30 – Using KNN to predict applicant success
20:00 – Why KNN is not covered by current AEDT rules
21:00 – How bias still exists despite algorithm simplicity
23:15 – Machine learning without training? Still biased
24:20 – A biased algorithm that evades regulation
25:30 – Why overly narrow definitions are dangerous
26:30 – Final thoughts: Expand the scope of Local Law 144

📌 What You'll Learn
🔵 How simple algorithms can still produce biased hiring outcomes
🔵 Why NYC Local Law 144's scope may be too narrow
🔵 The risks of excluding non–machine learning models from bias audits
🔵 How proxy variables and human bias shape algorithmic discrimination

💬 Join the Conversation
👍 Like this video if you want more deep dives into AI bias, regulation, and governance
🔔 Subscribe to Lunchtime BABLing for expert breakdowns of AI laws and compliance trends
💬 Share your thoughts in the comments: should Local Law 144 expand its scope?

🔖 Keywords
Local Law 144, NYC bias audit, AI bias, AEDT, hiring algorithms, algorithmic bias, K-Nearest Neighbors, responsible AI, AI regulation, tech policy

#AI #AlgorithmicBias #ResponsibleAI
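🧪 Bonus Sketch: How a KNN Screener Can Discriminate

To make the episode's core point concrete, here is a minimal, hypothetical sketch (not from the video) of a K-Nearest Neighbors résumé screener built on historical hiring decisions. The feature names (GPA, employment-gap months, zip-code bucket), the data values, and the use of scikit-learn are all assumptions made for illustration: the zip-code bucket stands in for a proxy variable, and the hired/rejected labels encode past human judgments.

```python
# Hypothetical illustration (not from the video): a K-Nearest Neighbors
# screener built on historical hiring decisions. Feature names, values,
# and the scikit-learn dependency are assumptions made for this sketch.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Past applicants: [GPA, employment-gap months, zip-code bucket].
# The zip-code bucket acts as a proxy variable: it can correlate with
# a protected category without naming it.
X_history = np.array([
    [3.9, 0, 1],
    [3.7, 2, 1],
    [3.8, 1, 1],
    [3.9, 0, 2],
    [3.7, 2, 2],
    [3.8, 1, 2],
])
# Labels encode past human decisions, bias included: bucket-1 applicants
# were hired, bucket-2 applicants were not, despite identical credentials.
y_hired = np.array([1, 1, 1, 0, 0, 0])

# "Fitting" KNN just stores the examples; no weights are learned, which
# is why such a tool can slip outside narrow machine-learning definitions.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_history, y_hired)

# Two new applicants, identical except for the zip-code bucket.
applicants = np.array([
    [3.8, 1, 1],
    [3.8, 1, 2],
])
print(knn.predict(applicants))  # -> [1 0]: the proxy variable alone flips the outcome
```

Because KNN only memorizes past examples and votes among the nearest ones, it involves no training in the usual sense, yet in this sketch two applicants with the same GPA and employment gap receive opposite predictions purely because of the proxy variable. That is the episode's argument in miniature: simplicity does not exempt an algorithm from bias, so definitions like Local Law 144's AEDT scope should not exempt it from audits.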