Revisiting PAC Learning - Kasper Green Larsen
ABSTRACT: PAC learning is one of the most fundamental theoretical models in which machine learning is studied. Despite the wealth of results in this model, optimal learning algorithms remained absent until breakthrough work by Steve Hanneke in 2016 on so-called realizable PAC learning. His algorithm is, however, quite complicated and somewhat unnatural. In this talk, I will present a sequence of results from the past few years that give alternative optimal learning algorithms. We will first see that the popular practical heuristic Bagging, dating back to Breiman in 1996, also provides an optimal realizable PAC learner. We will then see that if one is satisfied with an optimal in-expectation error, then an extremely simple Majority-of-3 algorithm suffices. Finally, we will turn to the agnostic PAC learning setup and give the first optimal algorithm. These results are based on papers at COLT'23 (Best Paper), COLT'24 and FOCS'24.

BIOGRAPHY: Kasper Green Larsen is a Professor at the Computer Science Department, Aarhus University, Denmark. He heads the Section on Algorithms, Data and Artificial Intelligence, which includes 50+ researchers spanning from theoretical computer science to the practice of machine learning. Larsen is an EATCS Fellow and a recipient of the EATCS Presburger Award, as well as Best Paper Awards at COLT, CRYPTO and STOC, and Best Student Paper Awards at STOC and FOCS. His research contributions span many areas of theoretical computer science, with an emphasis on machine learning theory, lower bounds, algorithms, data structures and applications in cryptography.
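To make the "Majority-of-3" idea concrete: it votes among three classifiers, each trained by a base learner on part of the sample. The sketch below is only an illustration of that voting structure, assuming binary labels and a toy threshold learner; the precise subsampling scheme and guarantees in the COLT'24 paper may differ.

```python
def majority_of_3(base_learner, X, y):
    """Train three base classifiers on three disjoint thirds of the
    sample and return a predictor that takes their majority vote."""
    n = len(X)
    thirds = [(0, n // 3), (n // 3, 2 * n // 3), (2 * n // 3, n)]
    models = [base_learner(X[a:b], y[a:b]) for a, b in thirds]

    def predict(x):
        votes = [m(x) for m in models]
        # With labels in {0, 1}, the majority label appears >= 2 times.
        return max(set(votes), key=votes.count)

    return predict

# Toy base learner (hypothetical, for illustration): a one-dimensional
# threshold classifier chosen by empirical risk minimization.
def threshold_erm(X, y):
    best_t, best_err = 0.0, float("inf")
    for t in sorted(set(X)):
        err = sum(int(x >= t) != label for x, label in zip(X, y))
        if err < best_err:
            best_t, best_err = t, err
    return lambda x, t=best_t: int(x >= t)

# Usage: labels are 1 exactly when x >= 0.5.
X = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9, 0.15, 0.55, 0.8]
y = [int(x >= 0.5) for x in X]
h = majority_of_3(threshold_erm, X, y)
```

The point of the construction is that a majority vote can damp the failure probability of any single run of the base learner, which is what drives the in-expectation optimality claimed in the abstract.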