Discovering Implicit Social Biases in Large Language Models

Dr. Sharon Levy (Rutgers)

It is generally known that large language models harbor social biases. However, these biases are not always explicit and may arise unexpectedly. For example, a model may not directly say that women make better nannies than men, but when asked to decide on a nanny between Carlos and Sarah, the model's internal biases can emerge. In this talk, I will discuss my work in discovering implicit biases in large language models across various domains. I will first describe my research in AI + healthcare, where we analyze context-dependent questions. The second part will discuss gender biases in decision-making with a study on relationship conflicts. I will conclude with our investigation of social biases in the space of AI + education.

Bio: Sharon Levy is an Assistant Professor of Computer Science at Rutgers University. Her research focuses on natural language processing, with an emphasis on Responsible AI. She works on problems relating to fairness, trustworthiness, and safety. Her research is interdisciplinary, and she regularly collaborates with academics in other disciplines, such as public health, gender studies, and political science. Previously, Sharon was a postdoctoral fellow at the Center for Language and Speech Processing (CLSP) at Johns Hopkins University and obtained her Ph.D. from the University of California, Santa Barbara.