People used to say "morality is too complex for machines". Then we started asking machines for advice about relationships, therapy, medical decisions, and legal dilemmas, because apparently we enjoy living dangerously. To help us think clearly about what's happening, I interviewed Danica Dillion. Danica looks at a surprising finding from Moral Turing Test–style research: in one study, participants rated ethical advice from GPT-4o as slightly more moral and trustworthy than advice from the well-known NYT column 'The Ethicist'.

Danica Dillion is a postdoctoral researcher working with Dr Mirta Galesic at the Complexity Science Hub and Dr Kurt Gray at the Deepest Beliefs Lab at The Ohio State University, and previously an NSF Graduate Research Fellow at UNC Chapel Hill.

Danica Dillion: https://danicadillion.com/
Deepest Beliefs Lab: https://www.deepestbeliefslab.com/
NYT column 'The Ethicist': https://www.nytimes.com/column/the-et...
Mind & Machine Alignment Summit at Ohio State University: https://u.osu.edu/mindmachinealignment/

00:00 Intro
01:05 Moral Turing Test study results
04:53 Human trust in AI: ceding moral authority?
08:28 If the MTT study were run with today's more powerful models, would the results differ?
10:45 Growing suspicion of AI, and how frontier labs respond to it in tuning their LLMs
12:50 Detectability
19:42 Is AI genuinely reasoning? Emergent symbolic reasoning
31:41 Are moral systems approximating some underlying moral structure?
34:20 Can AI discover ethics?
35:48 Can AI help us make progress in ethics?
40:39 Indirect normativity: choosing what to choose, an indirect value discovery procedure
43:00 Bias in the training data (large samples of the internet are in English)
45:52 Would people trust responses less if they knew they came from AI?
48:17 Would cybernetic partnerships between human and AI be trusted more than AI or humans alone?
1:07:49 Who gets the most say in the values that guide AI?
1:11:08 Overconfidence in AI & epistemic humility
1:12:45 Maximisation behaviour & insatiability
1:13:23 Can AI be more moral than humans?
1:27:17 Use of AI to reduce prejudice and partisan animosity
1:29:43 Mind & Machine Alignment Summit at Ohio State University - https://u.osu.edu/mindmachinealignment/

Many thanks for tuning in! Please support SciFuture by subscribing and sharing!
Buy me a coffee? https://buymeacoffee.com/tech101z
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P...

Kind regards,
Adam Ford
Science, Technology & the Future - #SciFuture - http://scifuture.org