Joshua Wenger, psychology, Penn State

Talk title (abstract below): Participant Preference for Human Versus AI Empathy Expressions

This presentation is part of the Moral Psychology Research Group 2024 conference on Penn State's University Park campus in November 2024. Hosted by the Consortium on Moral Decision-Making and Daryl Cameron, associate professor of psychology, Penn State, and senior research associate in the Rock Ethics Institute.

Sponsored by:
Penn State College of the Liberal Arts
Penn State Rock Ethics Institute
Penn State Social Science Research Institute
The McCourtney Institute for Democracy
Penn State Department of Philosophy
Penn State Department of Psychology
Penn State University Libraries

ABSTRACT: "Great ethical debate exists around empathic AI, with many discounting its empathy expressions as 'fake.' Despite this, people consistently rate AI messages as more empathetic than human messages. Although past work has explored these ratings of AI empathy expressions, little research has examined whether people actively seek out such messages or instead prefer human messages. The present research investigates whether people choose to receive empathetic expressions more from human or AI interaction partners. Participants read and imagined themselves in vignettes depicting various unfortunate circumstances (stepping on a thumbtack, losing a job, etc.). Following each individual vignette, participants chose between receiving an empathetic response from a human or an AI. Participants also rated how empathetic they found each response. This research explores overall choice preference between human and AI empathy expressions, whether this preference varies between empathy and compassion or between physical and emotional suffering, and how this preference relates to response ratings of empathy (i.e., if AI empathy expressions are rated as more empathetic, whether this actually translates to choosing to receive empathy from AI)."