Prof. Nick Baker, University of Windsor, Canada, Office of Open Learning, presents on "Responsible and ethical use of generative AI in research".

Outline: The integration of generative artificial intelligence (AI) into institutional and personal technologies, and the emergence of specialised AI agents targeted at various parts of the research process, are fundamentally changing research practices across all academic disciplines. While discussions thus far have largely centred on preserving research integrity, generative AI's transformative potential in research demands broader consideration. AI assistants are already enabling novel approaches to data analysis, accelerating discovery processes, supporting ideation, expanding research capacity, identifying cross-disciplinary connections, and providing accessibility supports for researchers with disabilities or neurodivergence. They are also already performing at or above human level in many domains, which raises questions about the role humans will play in research in the future. Research Ethics Boards are already facing the widespread use of AI in all aspects of the research enterprise, but are they prepared for independent AI agents leading research projects, or for AI as the subject of research proposals?

The use of generative AI in research raises similar concerns to its applications in other areas, including output accuracy and bias, data security, privacy, and the perpetuation of colonial research practices and norms. Additionally, recent partnerships between commercial AI companies and academic publishers raise ethical questions about the use of academic work for model training without author consent. These concerns have long existed in the open publishing and open science communities, aligning with the larger concern about the commodification of knowledge by extractive technologies and the role of open science. While acknowledging those concerns, organisations including UNESCO and the Open Data Institute emphasise that open science and open data are crucial for ethical and equitable AI development, potentially serving as a counterweight to online misinformation and to the biases that exist in other training datasets. While work remains to make open datasets AI-ready, generative AI could ultimately make these datasets more accessible and interpretable. This presentation will explore some of the practical applications of AI for researchers, consider how to use these responsibly and ethically, and discuss some of the implications for research ethics.

Speaker Bio: Nick Baker is the Director of the Office of Open Learning at the University of Windsor, Canada, and a professor of educational development with a focus on educational technologies in higher education. He teaches at the intersection of science, technology, ethics, policy, sustainability, and society, and is currently the co-chair of the UWindsor Academic Policy Committee's subcommittee on AI. The past chair of the Ontario Universities Council on eLearning and a founding board member of eCampus Ontario, Nick was recently named in EdTech Magazine's Top 30 Higher Ed IT Influencers to follow in 2024. Nick's relationship to AI goes back more than 20 years, to early work applying machine vision to wildlife management in field settings.