Mark Rudak, Product Owner of GiniMachine, delivers a technical deep dive into automated bias detection for credit scoring systems. Learn how GiniMachine prepares data, measures fairness across multiple thresholds, and ensures regulatory compliance with the EU AI Act, GDPR, the US ECOA, and emerging APAC standards.

🎯 WHAT YOU'LL LEARN:
• The regulatory landscape: EU AI Act/GDPR, US ECOA/Reg B, LATAM/APAC MAS FEAT
• Defining sensitive features: direct identifiers vs. indirect proxies vs. GDPR Art. 9 data
• Dataset integrity: representativeness, temporal validity, quality controls, restrictions
• Real-world domain mapping: auto loans, SME/business, telecom, micro-loans
• Bias metrics: Disparate Impact (DI) and Equal Opportunity Difference (EOD)
• Adaptive threshold evaluation: testing fairness at 0.25, 0.50, and 0.75 cutoffs
• Case study: "SeniorCitizen" bias detection revealing a 23% performance gap at strict thresholds
• Analytical conclusions: threshold sensitivity, calibration issues, actionable insights

📊 KEY TECHNICAL CONCEPTS:

Disparate Impact (DI):
• Ratio of favorable outcomes between groups
• Detects selection bias
• Target range: 0.8–1.25

Equal Opportunity Difference (EOD):
• Difference in True Positive Rates (recall) between groups
• Measures performance fairness
• Target: |EOD| ≤ 0.1

Adaptive Threshold Testing:
Why test at multiple thresholds? Fairness isn't static; it changes with the strictness of the approval cutoff. GiniMachine tests at:
• 0.25 (low threshold, more approvals)
• 0.50 (medium threshold)
• 0.75 (high threshold, more conservative)
Goal: identify whether models become MORE biased as they become more conservative.

💡 KEY INSIGHTS:

Dataset Integrity Principles:
1. Representativeness: minimum sample size per sensitive group (EU AI Act Art. 10 compliance)
2. Temporal Validity: mirror the current business cycle to avoid drift; always use the freshest data
3. Quality: verified labels (non-default/default borrowers) and balanced samples
4. Restriction: synthetic augmentation ONLY for fairness testing, NEVER for training

👥 ABOUT THE SPEAKER:
Mark Rudak is the Product Owner of GiniMachine, a no-code AI decision-making tool for credit scoring automation. Based in Vilnius, Lithuania (UAB HES Europe), Mark leads product development focused on transparency, fairness, and regulatory compliance for financial services firms globally.
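The two bias metrics and the adaptive threshold sweep described in the talk can be sketched in a few lines of Python. This is an illustrative sketch, not GiniMachine's actual implementation; the data, function names, and sensitive-group encoding (1 = protected group) are all assumptions for the example.

```python
# Minimal sketch of Disparate Impact (DI) and Equal Opportunity
# Difference (EOD), evaluated at the three thresholds mentioned in
# the talk. Group encoding: 1 = protected group, 0 = reference group.

def disparate_impact(scores, group, threshold):
    """DI: ratio of approval (favorable-outcome) rates, protected / reference.

    Target band per the talk: 0.8-1.25.
    """
    prot = [s >= threshold for s, g in zip(scores, group) if g == 1]
    ref = [s >= threshold for s, g in zip(scores, group) if g == 0]
    return (sum(prot) / len(prot)) / (sum(ref) / len(ref))

def equal_opportunity_difference(scores, labels, group, threshold):
    """EOD: difference in true-positive rates (recall) between groups.

    Target per the talk: |EOD| <= 0.1.
    """
    def tpr(g):
        tp = sum(1 for s, y, gg in zip(scores, labels, group)
                 if gg == g and y == 1 and s >= threshold)
        pos = sum(1 for y, gg in zip(labels, group) if gg == g and y == 1)
        return tp / pos
    return tpr(1) - tpr(0)

# Synthetic toy data standing in for model scores on a "SeniorCitizen"
# flag (1 = senior); labels: 1 = good (non-default) borrower.
scores = [0.9, 0.8, 0.6, 0.3, 0.7, 0.4, 0.85, 0.2, 0.55, 0.65]
labels = [1, 1, 1, 0, 1, 0, 1, 0, 1, 1]
group  = [1, 0, 1, 1, 0, 0, 1, 1, 0, 0]

# Adaptive threshold testing: re-check fairness at each approval cutoff.
for t in (0.25, 0.50, 0.75):
    di = disparate_impact(scores, group, t)
    eod = equal_opportunity_difference(scores, labels, group, t)
    print(f"threshold={t:.2f}  DI={di:.2f}  EOD={eod:+.2f}")
```

On this toy data the metrics degrade as the cutoff rises, which is exactly the pattern the talk's adaptive threshold testing is designed to surface: a model that looks fair at a lenient cutoff can become biased at a conservative one.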
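The representativeness principle from the dataset-integrity list can also be automated as a pre-flight check. This is a hypothetical sketch; the minimum-count floor and all names are assumptions, not an EU AI Act requirement or GiniMachine code.

```python
# Illustrative representativeness check: flag sensitive groups whose
# sample count falls below a floor before running fairness metrics.
from collections import Counter

MIN_PER_GROUP = 100  # hypothetical floor; set per your own policy

def check_representativeness(sensitive_values, min_per_group=MIN_PER_GROUP):
    """Return {group: count} for groups below the minimum sample size."""
    counts = Counter(sensitive_values)
    return {g: n for g, n in counts.items() if n < min_per_group}

rows = ["senior"] * 40 + ["non-senior"] * 500
underrepresented = check_representativeness(rows)
# -> {'senior': 40}: too few seniors for reliable DI/EOD estimates
```

A check like this makes the "minimum sample size per sensitive group" principle enforceable: fairness metrics computed on a handful of samples are noise, so underrepresented groups should block the audit rather than silently pass it.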