Overexamined Algorithms and Overlooked Agency: Rethinking Online Harm
Date Presented: 11/7/2025
Speaker: Homa Hosseinmardi, UCLA

Visit the links below to subscribe and for details on upcoming seminars:
https://www.isi.edu/isi-seminar-series
https://www.isi.edu/events

Abstract: In recent years, critics of online platforms have raised concerns about the ability of recommendation algorithms to amplify problematic content, with potentially radicalizing consequences. Yet most attempts to evaluate these claims suffer from a core methodological gap: the absence of appropriate counterfactuals — what users would have encountered without algorithmic recommendations — making it difficult to disentangle the influence of the algorithm from users' own intentions.

To address this challenge, we first examined the scale of the problem and possible explanations. While we identified several distinct communities of news consumers within YouTube, from moderate to more extreme, we found little evidence that the YouTube recommendation algorithm is actively driving attention to problematic content. Overall, our findings indicate that trends in video-based political news consumption are determined by a complicated combination of user preferences, platform features such as recommendation systems, and the supply-and-demand dynamics of the broader web.

We propose a novel method called "counterfactual bots," which enables us to disentangle the role of the user from that of platform features in the consumption of highly partisan content. By comparing bots that replicate real users' consumption patterns with counterfactual bots that follow rule-based trajectories, we show that, on average, relying exclusively on the recommender results in less partisan consumption, with the effect being most pronounced for heavy partisan consumers.

Speaker's Bio: Homa Hosseinmardi is an Assistant Professor of Data Science (DataX) and Computational Communication at UCLA, where she directs the OASIS Lab (Online and AI Systems' Integrity & Safety).
Her research takes a holistic, large-scale approach to understanding sociotechnical systems and information ecosystems, with a focus on safety and trustworthiness. She serves as an editor for the Journal of Quantitative Description: Digital Media, received the "Outstanding Research Award" during her Ph.D., and co-founded the CyberSafety workshop series. Her work has been featured in major media outlets and published in over 30 peer-reviewed papers, in top venues including PNAS, Science Advances, TKDE, and IMWUT.
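The "counterfactual bots" design described in the abstract can be illustrated with a toy simulation. The sketch below is purely hypothetical: the catalog, the moderating recommender, the partisanship scores, and all function names are illustrative assumptions, not the study's actual data or code. It contrasts a bot that replays a (simulated) heavy partisan consumer's trajectory with a counterfactual bot that copies the same history prefix and then relies exclusively on the recommender's top suggestion.

```python
import random

random.seed(0)

# Toy catalog: each video gets a partisanship score in [-1, 1] (assumption).
CATALOG = {f"v{i}": random.uniform(-1, 1) for i in range(200)}

def recommend(last_video, k=5):
    """Toy recommender (an assumption for illustration): suggest videos
    whose partisanship is closest to a moderated version of the last
    watched video's score, i.e., it pulls consumption toward the center."""
    target = 0.5 * CATALOG[last_video]
    ranked = sorted(CATALOG, key=lambda v: abs(CATALOG[v] - target))
    return ranked[:k]

def replay_bot(user_log):
    """Bot that replicates a real user's logged trajectory verbatim."""
    return list(user_log)

def counterfactual_bot(user_log, split, horizon):
    """Bot that copies the user's history up to `split`, then relies
    exclusively on the recommender (always takes the top suggestion)."""
    path = list(user_log[:split])
    for _ in range(horizon):
        path.append(recommend(path[-1])[0])
    return path

def mean_partisanship(path):
    """Average absolute partisanship along a watch trajectory."""
    return sum(abs(CATALOG[v]) for v in path) / len(path)

# A simulated heavy partisan consumer: watches the most partisan videos.
heavy_user = sorted(CATALOG, key=lambda v: -abs(CATALOG[v]))[:20]

actual = replay_bot(heavy_user)
counterfactual = counterfactual_bot(heavy_user, split=5, horizon=15)

# With this toy moderating recommender, the counterfactual trajectory
# ends up less partisan than the user's own trajectory.
print(mean_partisanship(actual), mean_partisanship(counterfactual))
```

Note that the moderating pull in `recommend` is a modeling choice made so the sketch reproduces the qualitative finding (recommender-only consumption is less partisan); the study's contribution is the comparison design itself, which isolates the recommender's contribution by holding the user's history fixed.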