FIU Solid Lab's Federated Education (FeDucation) Webinar Series

Speaker: Tian Li
Title: Tilted Losses in Machine Learning: Theory and Applications to Federated Learning

Related Publications:
[1] Tian Li, Ahmad Beirami, Maziar Sanjabi, and Virginia Smith. "On Tilted Losses in Machine Learning: Theory and Applications." Journal of Machine Learning Research 24 (2023): 1-79. https://www.jmlr.org/papers/v24/21-10...
[2] Tian Li, Ahmad Beirami, Maziar Sanjabi, and Virginia Smith. "Tilted Empirical Risk Minimization." In International Conference on Learning Representations, 2021. https://openreview.net/forum?id=K5Yas...
[3] Tian Li, Maziar Sanjabi, Ahmad Beirami, and Virginia Smith. "Fair Resource Allocation in Federated Learning." In International Conference on Learning Representations, 2020. https://openreview.net/forum?id=ByexE...

Abstract:
Exponential tilting is a technique commonly used to create parametric distribution shifts. Despite its prevalence in related fields, tilting has not seen widespread use in machine learning. In this talk, I discuss the tilted empirical risk minimization (TERM) framework, which uses exponential tilting to flexibly tune the impact of individual losses (a minimal sketch of the objective appears at the end of this description). I draw connections between TERM and related approaches such as Value-at-Risk, Conditional Value-at-Risk, and distributionally robust optimization, and present batch and stochastic first-order optimization methods for solving TERM at scale. Finally, I show that this approach can be applied to a multitude of problems in machine learning, such as enforcing fairness between subgroups, mitigating the effect of outliers, and handling class imbalance, delivering state-of-the-art performance relative to more complex, bespoke solutions for these problems. This talk is based on our recent JMLR paper [1].

Bio:
Tian Li is completing her Ph.D. in Computer Science at Carnegie Mellon University, working with Virginia Smith. She will join the Computer Science Department and the Data Science Institute at the University of Chicago as an Assistant Professor in 2024. Her research interests are in distributed optimization, federated learning, and trustworthy ML. Prior to CMU, she received undergraduate degrees in Computer Science and Economics from Peking University. She received the Best Paper Award at the ICLR Workshop on Security and Safety in Machine Learning Systems, was invited to participate in the EECS Rising Stars Workshop, and has been recognized as a Rising Star in Machine Learning/Data Science by multiple institutions.

#federatedlearning #machinelearning
www.solidlab.network
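
The abstract describes the TERM objective only informally; as a rough guide, the objective in [1, 2] replaces the plain average of per-sample losses with a log-sum-exp controlled by a tilt parameter t. The short Python sketch below is a minimal illustration of that idea, not the authors' reference implementation; the function name tilted_loss and the toy loss values are assumptions introduced here for demonstration.

import numpy as np

def tilted_loss(per_sample_losses, t):
    """Tilted empirical risk: (1/t) * log(mean(exp(t * loss_i)))."""
    losses = np.asarray(per_sample_losses, dtype=float)
    if abs(t) < 1e-12:
        return losses.mean()              # limit t -> 0 recovers ordinary ERM
    shifted = t * losses
    m = shifted.max()                     # log-sum-exp shift for numerical stability
    return (m + np.log(np.exp(shifted - m).mean())) / t

# Toy example with one large outlier loss:
losses = [0.1, 0.2, 0.15, 5.0]
print(tilted_loss(losses, t=0.0))         # close to the plain mean
print(tilted_loss(losses, t=10.0))        # pulled toward the largest loss
print(tilted_loss(losses, t=-10.0))       # pulled toward the smallest loss

As t grows positive, the objective weights the worst-off samples more heavily (relevant to the subgroup-fairness applications mentioned in the abstract), while negative t suppresses large losses (relevant to outlier mitigation); t near zero recovers standard empirical risk minimization.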