Scalable Bayesian Inference in Low-Dimensional Subspaces

Slides: https://bayesgroup.github.io/bmml_sem...

Pavel Izmailov and Polina Kirichenko, New York University

Bayesian methods can provide full predictive distributions and well-calibrated uncertainties in modern deep learning. However, scaling Bayesian inference techniques to deep neural networks (DNNs) is challenging due to the high dimensionality of the parameter space. In this talk, we will discuss two recent papers on scalable Bayesian inference that share a similar high-level idea: performing approximate inference in low-dimensional subspaces of the DNN parameter space.

In Subspace Inference for Bayesian Deep Learning [1], we propose to exploit the geometry of DNN training objectives to construct low-dimensional subspaces that contain diverse sets of models. In these subspaces, we are able to apply a wide range of advanced approximate inference methods, such as elliptical slice sampling and variational inference, which struggle in the full parameter space. We show that Bayesian model averaging over the induced posterior in these subspaces leads to strong performance in terms of accuracy and uncertainty quantification on regression and image classification tasks.

In Projected BNNs [2], the authors propose a variational inference framework for Bayesian neural networks that (1) encodes complex distributions in the high-dimensional parameter space with representations in a low-dimensional latent space, and (2) performs inference efficiently on these low-dimensional representations.
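As a rough illustration of the subspace-inference idea in [1] (not the authors' code, architecture, or hyperparameters), the sketch below fits a tiny NumPy network on a toy 1-D regression task, builds a low-dimensional subspace from the principal directions of late training iterates, runs elliptical slice sampling over the subspace coordinates z with parameters theta = theta_hat + P z, and averages the resulting predictions. The network size, subspace dimensionality K, prior, and step counts are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (an illustrative stand-in for the real tasks).
X = np.linspace(-4, 4, 128)[:, None]
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=128)

# Tiny MLP whose parameters are packed into a single flat vector theta.
H = 16
sizes = [(1, H), (H,), (H, 1), (1,)]
splits = np.cumsum([int(np.prod(s)) for s in sizes])[:-1]
dim = int(sum(np.prod(s) for s in sizes))

def unpack(theta):
    return [a.reshape(s) for a, s in zip(np.split(theta, splits), sizes)]

def predict(theta, X):
    W1, b1, W2, b2 = unpack(theta)
    return (np.tanh(X @ W1 + b1) @ W2 + b2)[:, 0]

def log_lik(theta, noise=0.3):
    r = y - predict(theta, X)
    return -0.5 * np.sum(r ** 2) / noise ** 2

# Gradient ascent on the log-likelihood (finite differences keep the sketch short),
# collecting late iterates the way [1] collects SGD iterates.
theta = 0.1 * rng.normal(size=dim)
iterates = []
for step in range(1500):
    g = np.zeros(dim)
    for i in range(dim):
        e = np.zeros(dim); e[i] = 1e-4
        g[i] = (log_lik(theta + e) - log_lik(theta - e)) / 2e-4
    theta += 1e-3 * g
    if step > 750 and step % 50 == 0:
        iterates.append(theta.copy())

# Subspace: the mean iterate plus the top principal directions of the deviations.
A = np.stack(iterates)
theta_hat = A.mean(axis=0)
_, _, Vt = np.linalg.svd(A - theta_hat, full_matrices=False)
K = 5                # subspace dimensionality (an assumed choice)
P = Vt[:K].T         # (dim, K) projection matrix

def subspace_log_lik(z):
    return log_lik(theta_hat + P @ z)

# Elliptical slice sampling over z, with a standard Gaussian prior on z.
def ess_step(z):
    nu = rng.normal(size=z.shape)
    log_u = subspace_log_lik(z) + np.log(rng.uniform())
    angle = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = angle - 2.0 * np.pi, angle
    while True:
        z_new = z * np.cos(angle) + nu * np.sin(angle)
        if subspace_log_lik(z_new) > log_u:
            return z_new
        lo, hi = (angle, hi) if angle < 0 else (lo, angle)
        angle = rng.uniform(lo, hi)

# Draw posterior samples in the subspace and form the Bayesian model average.
z, preds = np.zeros(K), []
for it in range(300):
    z = ess_step(z)
    if it >= 100:
        preds.append(predict(theta_hat + P @ z, X))

mean_pred, std_pred = np.mean(preds, axis=0), np.std(preds, axis=0)
print("predictive mean / std near x = 0:", round(mean_pred[64], 3), round(std_pred[64], 3))
```

The point of the sketch is that the sampler only ever works with the K-dimensional coordinates z; the full parameter vector is reconstructed only to make predictions. That is what makes methods such as elliptical slice sampling, which struggle in the full parameter space, practical in the subspace.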