#datascience #datasciencefestival #NLP

The popularisation of large pre-trained language models (LLMs) has led to their increased adoption in commercial settings. However, these models are usually pre-trained on raw, uncurated corpora that are known to contain a plethora of biases. This often results in undesirable model behaviour in real-world situations, which can cause societal or individual harm.

In this talk, Benjamin Ajayi-Obe and David Hopes, Data Scientists at Depop, explore the sources of this bias, as well as recent methods for measuring and mitigating it when using natural language processing (NLP) and transfer learning techniques.

This session was part of the Data Science Festival Summer School in 2021. Find out more at https://datasciencefestival.com/event...

The Data Science Festival is the place for data-driven people to come together, share cutting-edge ideas and solve real-world problems. We run monthly events, meetups and the biggest free-to-attend data festivals in the UK. Join the community at https://datasciencefestival.com/

#datasciencefestival #DIsummerschool #NLP
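To give a flavour of what "bias in a raw corpus" can look like, here is a minimal sketch (not taken from the talk) that measures gender–occupation association with simple co-occurrence counts. The toy corpus, word lists, and the skew statistic are all invented for illustration; real bias-measurement methods for LLMs are considerably more sophisticated.

```python
# Toy illustration: measure how often occupation words co-occur with
# male vs. female pronouns in the same sentence of a small corpus.
# Everything here (corpus, word lists, skew metric) is illustrative only.

from collections import Counter

corpus = [
    "the nurse said she would check the chart",
    "the engineer said he fixed the build",
    "the doctor said he was running late",
    "the nurse said she had finished her shift",
    "the engineer said he reviewed the design",
    "the doctor said she would call back",
]

male_terms = {"he", "him", "his"}
female_terms = {"she", "her", "hers"}
occupations = {"nurse", "engineer", "doctor"}

# Count sentence-level co-occurrences of each occupation with
# male and female pronoun sets.
counts = {occ: Counter() for occ in occupations}
for sentence in corpus:
    tokens = set(sentence.split())
    for occ in occupations & tokens:
        counts[occ]["male"] += len(male_terms & tokens)
        counts[occ]["female"] += len(female_terms & tokens)

# Skew in [-1, +1]: +1 means exclusively male pronouns, -1 exclusively female.
for occ in sorted(occupations):
    m, f = counts[occ]["male"], counts[occ]["female"]
    total = m + f
    skew = (m - f) / total if total else 0.0
    print(f"{occ}: male={m} female={f} skew={skew:+.2f}")
```

On this tiny corpus, "nurse" skews entirely female and "engineer" entirely male, which is exactly the kind of correlation a language model can absorb during pre-training and later reproduce as stereotyped predictions.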