Presented on Thursday, February 15th, 2024, 10:30 AM, room C221

Speaker: Idan Mehalel (Technion)

Title: Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension

Abstract: Consider the following two classical online learning problems:

(1) Suppose that n forecasters provide daily rain/no-rain predictions, and the best among them is mistaken on at most k days. How many mispredictions will be made by a person who is allowed to observe the predictions but has no prior weather-forecasting knowledge?

(2) Suppose that a set of cat/dog classification functions over a set of cat/dog pictures is given, and the best function misclassifies at most k pictures. How many mistakes will be made on an online stream of such pictures by a blindfolded person who knows the function class?

In this talk, we will discuss how such classical problems can be reduced to calculating the (average) depth of binary trees, using newly introduced complexity measures (a.k.a. dimensions) of the set of experts/functions. All of these measures are variations of the classical Littlestone dimension (Littlestone '88). Specifically, for problems (1) and (2) above we obtain the following results:

(1) In the forecasters setting, Cesa-Bianchi, Freund, Helmbold, and Warmuth ['93, '96] provided a nearly optimal bound for deterministic learners and left the randomized case as an open problem. We resolve this problem by providing an optimal randomized learner and showing that its expected mistake bound equals half of the deterministic bound of Cesa-Bianchi et al., up to negligible additive terms.

(2) For general classification functions, we show that the optimal expected regret (= #mistakes - k) when learning a function class with Littlestone dimension d is of order d + (kd)^0.5.

Based on joint work with Yuval Filmus, Steve Hanneke, and Shay Moran.
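To make the forecasters setting of problem (1) concrete, here is a minimal sketch of the classical deterministic Weighted Majority algorithm (Littlestone–Warmuth), which achieves a mistake bound of order k + log n in this setting. This is an illustration of the problem model only, not the optimal (randomized) learner from the talk; the function name and the penalty factor beta=0.5 are choices made here for the example.

```python
def weighted_majority(expert_preds, outcomes, beta=0.5):
    """Deterministic Weighted Majority sketch.

    expert_preds: list of rounds; each round is a list of n binary
                  (0/1) predictions, one per expert.
    outcomes:     list of true binary labels, one per round.
    Returns the learner's total number of mistakes.
    """
    n = len(expert_preds[0])
    weights = [1.0] * n  # every expert starts equally trusted
    mistakes = 0
    for preds, y in zip(expert_preds, outcomes):
        # Predict by weighted vote over the experts' advice.
        vote1 = sum(w for w, p in zip(weights, preds) if p == 1)
        vote0 = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote1 >= vote0 else 0
        if guess != y:
            mistakes += 1
        # Multiplicatively penalize every expert that erred this round.
        weights = [w * beta if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes
```

Because wrong experts lose weight geometrically, an expert with at most k mistakes keeps weight at least beta^k, which is what drives the k + log n style bound in the deterministic analysis.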
Bio: Idan Mehalel is a PhD student in the computer science department at the Technion, where he is advised by Yuval Filmus and Shay Moran. He works primarily on learning theory, with a recent focus on online learning theory.

Link to the Panopto meeting: https://huji.cloud.panopto.eu/Panopto...
Link to past lectures: / @hujimachinelearningclub8982
Online calendar: Learning Club @ HUJI https://www.google.com/calendar/embed...
Calendar ID: [email protected]
Mailing list subscription manager: http://mailman.cs.huji.ac.il/mailman/...