What happens when AI is used to analyse human behaviour and relationships, and the output is treated as reliable evidence in a formal process against another person?

Dr Craig Webber (https://www.southampton.ac.uk/people/...), School Lead for the MA in Artificial Intelligence at the University of Southampton (https://www.online.southampton.ac.uk/...), joins the podcast to explore a growing and largely unaddressed risk at the intersection of AI and institutional decision making.

Craig introduces a concept with profound implications for anyone who has ever been on the receiving end of a formal process: the confident confabulation. Large language models don't flag uncertainty. They don't interrogate the premise of the question they're asked. They reflect back whatever narrative they're fed, dressed in language that carries the appearance of authority and expertise. The result can be devastating, and the frameworks for accountability when it goes wrong are, at best, underdeveloped.

This conversation explores how sycophantic AI reflects back and amplifies the narratives it receives, how AI-generated analysis gets laundered into apparently human-authored reports, and what it means when confident confabulations enter high-stakes processes where people's lives and reputations are on the line.

Craig returns throughout to two words. Legitimacy: does the process that produced this output have any genuine claim to being a reliable account of what actually happened? And accountability: when a confident confabulation causes real harm to a real person, who answers for that? Not the AI. Not the platform. Not the person who fed it the narrative and accepted what it reflected back without question. Currently, the answer is nobody.

AI Ethics Now: Exploring the ethical dilemmas of AI in Higher Education and beyond. A University of Warwick IATL Podcast.

This podcast series was developed by Dr Tom Ritchie and Dr Jennie Mills, the module leads of the IATL module "The AI Revolution: Ethics, Technology, and Society" at the University of Warwick.
The IATL module "The AI Revolution: Ethics, Technology, and Society" (https://warwick.ac.uk/fac/cross_fac/i...) explores the history, current state, and potential futures of artificial intelligence, examining its profound impact on society, individuals, and the very definition of 'humanness.'

This podcast was initially designed to provide a deeper dive into the key themes explored each week in class. We want to share the discussions we have had, offering a broader, interdisciplinary perspective on the ethical and societal implications of artificial intelligence to a wider audience. Join us each fortnight for new critical conversations on AI Ethics with local, national, and international experts.

We will discuss:
• Ethical Dimensions of AI: Fairness, bias, transparency, and accountability
• Societal Implications: How AI is transforming industries, economies, and our understanding of humanity
• The Future of AI: Potential benefits, risks, and shaping a future where AI serves humanity

If you want to join the podcast as a guest, contact Tom.Ritchie@warwick.ac.uk.