Eugenia Rho – Evaluation and Design Challenges for Human-Centered NLP
Talk Title: When Language Models Adapt, Teach, and Train: Evaluation and Design Challenges for Human-Centered NLP

Abstract: Large language models are increasingly positioned as personal advisors, collaborators, and coaches. As the role of AI evolves from text generator to interactive agent embedded in everyday life, the central question shifts from what language models can generate to what generated language does in context, raising new challenges for HCI x NLP research. How can AI agents adapt to users without stereotyping them? How do different human–AI team structures change how agent suggestions are interpreted, discussed, and used? And how do agent suggestions shape what humans do, decide, and learn over time? This talk draws on three case studies to explore these questions.

(1) Personalization and bias. We develop a stereotype-grounded audit methodology to study disclosure-based personalization in LLM advice. Across multiple models, identity disclosure systematically shifts recommendations toward more risk-averse or stereotype-aligned decisions. These results complicate standard bias-evaluation pipelines by showing that behavioral differences in models cannot be interpreted without user context and expectations, motivating interaction-aware audit frameworks.

(2) Learning with LLMs. In a controlled pair-programming study, we compare human–AI pairs with human–human–AI (HHAI) triads. When working alone with AI, participants frequently incorporated its suggestions into their solutions. When collaborating with a human peer alongside AI, participants became far more selective, discussing and filtering suggestions before using them. These findings show that how model suggestions are taken up, and how people talk around them, depends strongly on the human–AI teaming structure, and they point to the importance of examining not only what models generate but how their suggestions are used in practice.
(3) AI-mediated communication training. We study LLM-based communication-coaching systems that generate role-play scenarios and feedback for workplace conversations. These systems raise open questions for NLP about intent modeling, longitudinal adaptation, and how to evaluate models when success involves longer-term changes in user behavior rather than immediate task accuracy.

Together, these cases highlight emerging challenges for HCI x NLP as agents are used to advise, teach, and train humans. They point to the need for personalization that adapts without stereotyping, evaluation approaches that go beyond static outputs, and a better understanding of how generated language shapes user decisions and learning over time.

To check out other talks in our full NLP Seminar Series, please visit: UCLA NLP Seminar Series