Margaret Roberts & Jeffrey Ding: Censorship's Implications for Artificial Intelligence
In this webinar, Jeffrey Ding and Molly Roberts will discuss an upcoming paper co-authored by Molly and Eddie Yang.

Abstract: While artificial intelligence provides the backbone for many tools people use around the world, recent work has brought attention to the potential biases that may be baked into these algorithms. While most work in this area has focused on the ways these tools can exacerbate existing inequalities and discrimination, we bring to light another way in which algorithmic decision making may be affected by institutional and societal forces. We study how censorship has affected the development of Wikipedia corpora, which are in turn regularly used as training data for NLP algorithms. We show that word embeddings trained on the regularly censored Baidu Baike produce very different associations between adjectives and a range of concepts (democracy, freedom, collective action, equality, and people and historical events in China) than embeddings trained on its uncensored counterpart, Chinese-language Wikipedia. We examine the origins of these discrepancies using surveys from mainland China, and we examine their implications through their use in downstream AI applications.

Molly Roberts is an Associate Professor in the Department of Political Science and the Halıcıoğlu Data Science Institute at the University of California, San Diego. She co-directs the China Data Lab at the 21st Century China Center and is also part of the Omni-Methods Group. Her research interests lie at the intersection of political methodology and the politics of information, with a specific focus on methods of automated content analysis and the politics of censorship and propaganda in China.

Jeffrey Ding is the China lead for the Centre for the Governance of AI. Jeff researches China's development of AI at the Future of Humanity Institute, University of Oxford.
His work has been cited in the Washington Post, South China Morning Post, MIT Technology Review, Bloomberg News, Quartz, and other outlets. A fluent Mandarin speaker, he has worked at the U.S. Department of State and the Hong Kong Legislative Council. He is also reading for a D.Phil. in International Relations as a Rhodes Scholar at the University of Oxford.

Bibliography:

Sweeney, Latanya. "Discrimination in Online Ad Delivery." Communications of the ACM 56, no. 5 (2013): 44-54.

Koenecke, Allison, Andrew Nam, Emily Lake, Joe Nudell, Minnie Quartey, Zion Mengesha, Connor Toups, John R. Rickford, Dan Jurafsky, and Sharad Goel. "Racial Disparities in Automated Speech Recognition." Proceedings of the National Academy of Sciences 117, no. 14 (2020): 7684-7689.

Davidson, Thomas, Debasmita Bhattacharya, and Ingmar Weber. "Racial Bias in Hate Speech and Abusive Language Detection Datasets." Proceedings of the Third Workshop on Abusive Language Online. 2019.

Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. "Semantics Derived Automatically from Language Corpora Contain Human-Like Biases." Science 356, no. 6334 (2017): 183-186.

Zhao, Jieyu, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. "Men Also Like Shopping: Reducing Gender Bias Amplification Using Corpus-Level Constraints." Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. 2017.

Li, Shen, Zhe Zhao, Renfen Hu, Wensi Li, Tao Liu, and Xiaoyong Du. "Analogical Reasoning on Chinese Morphological and Semantic Relations." Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 138-143. 2018.
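The embedding-association comparison the abstract describes (and that Caliskan et al. formalize as WEAT-style tests) can be sketched in a few lines. This is not the authors' code; the vectors below are tiny hypothetical stand-ins for embeddings one would actually train on each corpus, and the word lists are illustrative only.

```python
# A minimal sketch of comparing how a target concept associates with
# evaluative adjectives in embeddings trained on two different corpora.
# The 3-d vectors are hypothetical examples, not real trained embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(emb, target, positive, negative):
    """Mean similarity of `target` to positive adjectives minus mean
    similarity to negative adjectives (a WEAT-style contrast score)."""
    pos = sum(cosine(emb[target], emb[w]) for w in positive) / len(positive)
    neg = sum(cosine(emb[target], emb[w]) for w in negative) / len(negative)
    return pos - neg

# Hypothetical embeddings standing in for each training corpus.
wiki = {"democracy": [0.9, 0.1, 0.0], "good": [0.8, 0.2, 0.1], "bad": [0.0, 0.9, 0.4]}
baike = {"democracy": [0.1, 0.8, 0.3], "good": [0.8, 0.2, 0.1], "bad": [0.0, 0.9, 0.4]}

for name, emb in [("wikipedia", wiki), ("baike", baike)]:
    print(name, round(association(emb, "democracy", ["good"], ["bad"]), 3))
```

With real data, the dictionaries would come from word vectors trained separately on each corpus (e.g. with word2vec or GloVe), and the positive/negative lists would be the adjective sets under study; a large gap between the two scores for the same target word is the kind of discrepancy the paper measures.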