Date Presented: 9/30/2025
Speaker: Tejas Srinivasan

Visit the links below to subscribe and for details on upcoming seminars:
https://www.isi.edu/isi-seminar-series
https://www.isi.edu/events

Abstract: With the proliferation of AI assistants in high-stakes and safety-critical decision-making tasks, it is important to understand what factors modulate how people rely on AI assistance. One important factor is the user's trust in the AI assistant: low trust can lead users to ignore accurate AI advice (under-reliance), while high trust can lead them to accept incorrect AI advice (over-reliance). We propose that AI assistants should adapt their behavior through trust-adaptive interventions to mitigate such inappropriate reliance. For instance, when user trust is low, providing an explanation can elicit more careful consideration of the assistant's advice. In two decision-making scenarios -- laypeople answering science questions and doctors making medical diagnoses -- we find that providing supporting and counter-explanations during moments of low and high trust, respectively, yields up to a 38% reduction in inappropriate reliance and a 20% improvement in decision accuracy. We are similarly able to reduce over-reliance by adaptively inserting forced pauses to promote deliberation when users have high trust in the AI assistant. Our results highlight how AI adaptation to user trust can facilitate appropriate reliance, presenting exciting avenues for improving human-AI collaboration.

Speaker's Bio: Tejas is a fifth-year PhD student in the Computer Science department at USC, advised by Prof. Jesse Thomason in the GLAMOR Lab. His research explores how human-centered design of AI systems can boost human-AI collaboration, especially in situations characterized by uncertainty.
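To make the trust-adaptive intervention idea from the abstract concrete, here is a minimal illustrative sketch in Python. It assumes a scalar trust estimate in [0, 1]; the function name, thresholds, and trust-estimation setup are hypothetical placeholders for illustration, not the speaker's actual method.

# Illustrative sketch of selecting a trust-adaptive intervention.
# All names and thresholds below are assumptions, not the published approach.

from enum import Enum, auto


class Intervention(Enum):
    SUPPORTING_EXPLANATION = auto()  # encourage uptake of advice when trust is low
    COUNTER_EXPLANATION = auto()     # prompt scrutiny of advice when trust is high
    FORCED_PAUSE = auto()            # slow the user down to promote deliberation
    NONE = auto()


def select_intervention(trust: float,
                        low_threshold: float = 0.3,
                        high_threshold: float = 0.7,
                        prefer_pause: bool = False) -> Intervention:
    """Pick an intervention from an estimated user-trust score in [0, 1].

    Thresholds are illustrative; in practice, trust would be estimated from
    behavioral signals such as the user's recent agreement with AI advice.
    """
    if trust < low_threshold:
        # Low trust risks under-reliance: support the advice with an explanation.
        return Intervention.SUPPORTING_EXPLANATION
    if trust > high_threshold:
        # High trust risks over-reliance: counter-explain, or force a pause.
        return Intervention.FORCED_PAUSE if prefer_pause else Intervention.COUNTER_EXPLANATION
    return Intervention.NONE


if __name__ == "__main__":
    for t in (0.1, 0.5, 0.9):
        print(f"trust={t:.1f} -> {select_intervention(t).name}")

The two-threshold design mirrors the abstract's framing: supporting explanations counter under-reliance at low trust, while counter-explanations or forced pauses counter over-reliance at high trust, with no intervention in between.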