Google DeepMind's PAIR (People + AI Research) team presents Deliberate Lab, an open-source platform for large-scale, real-time behavioral experiments that supports both human participants and large language model (LLM)-based agents, enabling online research on human and LLM group dynamics.

In this conversation, Jerome Wynne, Senior AI Research Engineer at Prolific, sits down with Crystal Qian, Senior Research Scientist at Google DeepMind, who led the team behind this research. They talk about creating LLM simulacra of human participants and the surprising finding that some models mirror human biases while others naturally select optimal leaders. We get into the design challenges of building AI agents that can participate in group conversations without dominating them, the negotiation study in which LLMs and humans extracted similar value through completely different strategies, and why aggregate alignment metrics can be dangerously misleading. We also discuss the engineering challenges of synchronous online research, the video-game-lobby system the team built to solve coordination problems, how a simple status indicator dramatically reduced participant attrition, and the unexpected finding that half their users didn't even want the AI features. This is a conversation about what happens when you put humans and AI in the same room and try to make collective decisions, and what we're learning about both.

Chapters:
0:00 - Introduction
0:50 - Why Prolific for behavioral research
2:45 - What is Deliberate Lab
3:52 - The tooling gap in group dynamics research
7:18 - Lost at Sea: gender bias in leadership election
9:49 - Measuring confidence and competence
12:59 - Gender as a coordination mechanism
16:02 - LLM simulacra of human participants
20:33 - Where LLM conversations break down
22:00 - Mirroring vs. normative modes in models
24:35 - Solving the synchronous coordination problem
27:02 - What went wrong in early deployments
30:23 - Unexpected use cases from the research community
33:52 - AI facilitation for consensus building
38:17 - The negotiation and trading study
44:58 - Why aggregate alignment metrics are misleading
47:00 - LLMs as participants vs. tools
50:37 - Can AI make group conversations better or worse
53:17 - Designing agents for organic group interaction
57:22 - Eating your own dog food
59:43 - How human attitudes toward AI are changing

About the guest:
Crystal Qian is a Senior Research Scientist at @googledeepmind, within the People + AI Research (PAIR) group. She leads a team investigating how LLMs can shape and improve social dynamics. Recent work includes simulating voting patterns in group elections, evaluating how LLM assistance can improve bargaining outcomes and group consensus, and developing scalable evaluation methods. Her current research interests include human-AI interaction, agentic simulations, and societal impact, grounded through the analytical lens of game mechanics and behavioral experimentation.

Read the Deliberate Lab paper on arXiv: https://arxiv.org/pdf/2510.13011v1
Learn more about Deliberate Lab: https://deliberate-lab.appspot.com/#/
Get the quality human data you need for AI research and development: https://www.prolific.com/ai?utm_sourc...

Connect with Prolific:
🔵 X: / prolific
🔵 LinkedIn: / prolific-com
🔵 Facebook: / joinprolific
🔵 Instagram: / joinprolific
🔵 Bluesky: https://bsky.app/profile/joinprolific...

#ai #deepmind #prolific