*AI Persuasion and Voter Influence in Human-Centric Dialogues*
This research investigates the power of artificial intelligence to sway voter opinions through interactive, personalized dialogues across different nations and electoral contexts. By conducting experiments in the United States, Canada, and Poland, the researchers demonstrated that AI models can effectively shift candidate preferences and increase the intent to vote. The study reveals that personalization and the use of factual evidence are the most successful techniques for achieving these persuasive outcomes. Interestingly, the AI proved most influential when engaging with out-party supporters, showing a greater capacity to change minds than traditional static advertisements. While the bots were generally accurate, their effectiveness diminished significantly when they were restricted from using facts and reason. Ultimately, the findings highlight a profound potential for generative AI to impact democratic processes by tailoring political messaging to individual concerns.

*Thomas H. Costello, Gordon Pennycook, and David G. Rand* are a team of behavioural science researchers whose collaborative work, as detailed in the sources, investigates the capacity of **artificial intelligence to influence and change human beliefs**.

*Researcher Profiles*

*Thomas H. Costello:* He is affiliated with the *Department of Psychology at American University* and the *Department of Social and Decision Sciences at Carnegie Mellon University*. He served as the corresponding author for the team's high-profile study on debunking conspiracy theories.

*Gordon Pennycook:* He holds affiliations with the *Department of Psychology at Cornell University* and the *Hill/Levene Schools of Business at the University of Regina*. His research frequently addresses reasoning errors and the psychological mechanisms behind the reception of misinformation.

*David G. Rand:* He is a professor at the *MIT Sloan School of Management* and is also affiliated with *Cornell University's* Department of Information Science and SC Johnson School of Business. The sources note that some of his other research has received funding from technology firms such as Google and Meta.

*Key Research Papers*

The researchers have co-authored two major papers mentioned in the sources that explore human-AI interaction:

1. *"Durably reducing conspiracy beliefs through dialogues with AI" (2024):* Published in *Science*, this study challenged the "conventional wisdom" that conspiracy believers are immune to facts. By using *GPT-4 Turbo* to provide bespoke, evidence-based counterarguments, the researchers reduced participants' belief in conspiracies by an average of **20%**. These effects were found to be **highly persistent**, remaining virtually undiminished two months after the initial three-round dialogue.

2. *"Persuading Voters using Human-AI Dialogues":* This research examined the power of AI to influence voter attitudes during elections in the *United States, Canada, and Poland*. The study found that AI models could significantly shift **candidate preferences**, achieving effects larger than those typically produced by traditional political video advertisements. A significant finding in this work was that while the AI models generally relied on facts, those advocating for *right-leaning candidates* across all three countries tended to make more inaccurate claims.

*Core Theoretical Contributions*

According to the sources, the work of Costello, Pennycook, and Rand suggests a more *optimistic view of human reasoning* than previously held. Their findings indicate that people are not necessarily "blinded" by psychological needs or motivations to believe untruths; rather, they are often capable of updating their views when met with *sufficiently compelling and personalised evidence*. Their research aligns with a theoretical perspective that emphasises the role of *analytic thinking* and evidence-based deliberation in correcting epistemically suspect beliefs.