Harnessing Human Uncertainty to Improve AI
Artificial intelligence (AI) holds great promise for complex decision-making tasks, such as interpreting medical images. However, human errors during AI development can introduce biases into models and create misalignment between machines and human users. Despite advances in unsupervised machine learning, most systems still rely on human-labeled data -- a massive industry powered by data annotation companies. These companies often aggregate labels from multiple annotators to improve accuracy, leveraging the “Wisdom of the Crowd.” In this talk, I examine how human annotators are subject to systematic biases. These biases can propagate from individuals to crowds to machine learning models. I’ll present cognitive-inspired data engineering methods that correct for these biases using well-established models of human subjective probability judgment. These approaches can improve model accuracy, calibration, and alignment with expert decision-makers. This work underscores the importance of understanding human cognition and decision-making in the training and development of AI systems.
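The abstract does not specify the speaker's methods, but the idea of correcting aggregated annotator judgments with a model of subjective probability can be illustrated with a minimal sketch. Here we assume, purely for illustration, that every annotator distorts true probabilities according to a standard one-parameter probability-weighting function (the Karmarkar form, p^γ / (p^γ + (1−p)^γ)); averaging the reported judgments then inherits the bias, while inverting the distortion recovers the true probability. The functions, the shared bias parameter `gamma`, and the simulated annotators are all hypothetical:

```python
# Hypothetical sketch, NOT the speaker's actual method: correcting a
# systematic bias in aggregated probability judgments by inverting an
# assumed one-parameter probability-distortion model.

def distort(p, gamma):
    """One-parameter probability-weighting distortion (Karmarkar form)."""
    return p**gamma / (p**gamma + (1 - p)**gamma)

def recalibrate(p, gamma):
    """Exact inverse of distort: same form with exponent 1/gamma."""
    return distort(p, 1.0 / gamma)

def aggregate(judgments):
    """Simple 'Wisdom of the Crowd': average the reported probabilities."""
    return sum(judgments) / len(judgments)

true_p = 0.9    # ground-truth probability of, e.g., a positive diagnosis
gamma = 0.6     # assumed shared bias: judgments compressed toward 0.5

# Five annotators who all share the same systematic distortion.
reported = [distort(true_p, gamma) for _ in range(5)]

naive = aggregate(reported)               # biased crowd average (< 0.9 here)
corrected = recalibrate(naive, gamma)     # debiased estimate (recovers 0.9)
```

The point of the sketch is that averaging alone cannot remove a bias shared by all annotators: the crowd average of identically distorted judgments is still distorted, so a cognitive model of the distortion is needed to undo it.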