Boaz Nadler (Weizmann Institute of Science)
https://simons.berkeley.edu/talks/tri...
Statistics in the Big Data Era

Consider the sparse approximation, or best subset selection, problem: given a vector y and a matrix A, find a k-sparse vector x that minimizes the residual ||Ax - y||. This sparse linear regression problem, and related variants, plays a key role in high-dimensional statistics, compressed sensing, and more.

In this talk we focus on the trimmed lasso penalty, defined as the L_1 norm of x minus the L_1 norm of its top k entries in absolute value; the penalty is zero exactly when x is k-sparse. We advocate using this penalty by deriving sparse recovery guarantees for it and by presenting a practical approach to optimizing it. Our computational approach is based on the generalized soft-min penalty, a smooth surrogate that takes into account all possible k-sparse patterns. We derive a polynomial-time algorithm to compute it, which in turn yields a novel method for the best subset selection problem. Numerical simulations illustrate its competitive performance compared to the current state of the art.
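For concreteness, here is a minimal sketch of the trimmed lasso penalty as defined above: the L_1 norm of x minus the L_1 norm of its k largest-magnitude entries, which equals the sum of the n - k smallest absolute values of x. This assumes only numpy; the function name and the test vector are illustrative, not taken from the talk.

```python
import numpy as np

def trimmed_lasso(x, k):
    """Trimmed lasso penalty: ||x||_1 minus the L1 norm of the k
    largest-magnitude entries of x, i.e. the sum of the n - k smallest
    absolute values. It equals zero exactly when x is k-sparse."""
    a = np.sort(np.abs(x))              # absolute values, ascending
    return a[:-k].sum() if k > 0 else a.sum()

# Example (illustrative vector): a 2-sparse x has zero penalty for k = 2.
x = np.array([0.0, 3.0, 0.0, -1.5, 0.0])
print(trimmed_lasso(x, k=2))  # 0.0  -> x is 2-sparse
print(trimmed_lasso(x, k=1))  # 1.5  -> mass outside the top-1 entry
```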