Natalie Collina: Learning and Incentives in Human-AI Collaboration (February 6, 2026)
As AI systems become more capable, a central challenge is designing them to work effectively with humans. Natalie Collina first considers collaborative prediction, motivated by a doctor consulting an AI that shares the goal of accurate diagnosis. Even when the doctor and the AI have only partial and incomparable knowledge, repeated interaction enables richer forms of collaboration: the talk gives distribution-free guarantees that their combined predictions are strictly better than either alone, with regret bounds against benchmarks defined on their joint information.

She then revisits the alignment assumption itself. If an AI is developed by, say, a pharmaceutical company with its own incentives, how can we encourage helpful behavior? A natural scenario is that the doctor has access to multiple models, each from a different provider. Under a milder "market alignment" assumption, namely that the doctor's utility lies in the convex hull of the providers' utilities, the talk shows that in a Nash equilibrium of this competition the doctor can achieve the same outcomes as if a perfectly aligned provider were present.

Based on joint work: Tractable Agreement Protocols (STOC'25), Collaborative Prediction (SODA'26), and Emergent Alignment via Competition (in submission).

For more information, please visit: https://www.simonsfoundation.org/even...
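The market-alignment condition above can be made concrete with a small sketch. Assuming utilities are modeled as finite vectors (one entry per outcome), the condition asks whether the doctor's utility vector is a convex combination of the providers' utility vectors. For the two-provider case the convex hull is just a line segment, so membership reduces to solving for a single mixing weight. The representation and function name below are illustrative assumptions, not taken from the papers:

```python
def in_convex_hull_of_two(u, v1, v2, tol=1e-9):
    """Check whether utility vector u equals lam*v1 + (1-lam)*v2
    for some mixing weight lam in [0, 1] (segment membership)."""
    d = [a - b for a, b in zip(v1, v2)]   # segment direction v1 - v2
    r = [a - b for a, b in zip(u, v2)]    # u relative to endpoint v2
    denom = sum(x * x for x in d)
    if denom < tol:
        # Degenerate case: v1 == v2, hull is a single point.
        return all(abs(x) < tol for x in r)
    # Least-squares mixing weight along the segment.
    lam = sum(x * y for x, y in zip(r, d)) / denom
    if not (-tol <= lam <= 1 + tol):
        return False  # projection falls outside the segment
    # Verify u actually lies on the segment (zero residual).
    recon = [lam * a + (1 - lam) * b for a, b in zip(v1, v2)]
    return all(abs(x - y) < tol for x, y in zip(u, recon))

# An evenly split utility lies in the hull of two opposed providers;
# a utility outside the segment does not.
print(in_convex_hull_of_two((0.5, 0.5), (1.0, 0.0), (0.0, 1.0)))  # True
print(in_convex_hull_of_two((2.0, -1.0), (1.0, 0.0), (0.0, 1.0)))  # False
```

With more than two providers, the same membership question becomes a small linear-programming feasibility check over the mixing weights; the two-provider case is shown here only because it admits a closed form.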