Van Vreeswijk Theoretical Neuroscience Seminar
www.wwtns.online; on twitter: WWTNS@TheoreticalWide
Wednesday, March 4, 2026, at 11:00 am ET

Lior Fox, Gatsby Computational Neuroscience Unit

Title: Unsupervised representation learning by amortised neural message-passing

Abstract: Useful internal representations should explain the patterns of regularities and dependencies among observations. Probabilistic graphical models promise a principled way to uncover such latent factors, but they are hard to scale to high-dimensional sensory observations and complicated dependency structures. Neural networks, on the other hand, excel at approximating complicated high-dimensional functions, but their internal representations do not easily lend themselves to a probabilistic interpretation. Despite some successes, a general unified approach for integrating the two is still missing. I will describe a novel approach to merging adaptive neural-network components into a probabilistic framework, based on three core ideas. The first is to train a set of networks to collectively perform inference, leveraging the ability of pattern-recognition methods to amortise complicated transformations. The second is to constrain the way in which the outputs of these networks are interpreted, transformed, and combined; these constraints, together with the learning objective itself, are derived directly from probabilistic considerations encoded in a graphical model. Finally, the third core idea is recognition-parametrisation, which allows the inference ("recognition") procedure to directly define the model itself, without requiring an explicit "generative" decoder.
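To make the idea of amortised inference concrete (this is a hypothetical toy illustration, not the speaker's method): in a linear-Gaussian model with latent z ~ N(0, 1) and observation x = w·z + noise, the exact posterior mean E[z|x] is a known linear function of x. A "recognition network" — here reduced to a single weight `a` — can be trained by gradient descent on samples to map x directly to the posterior mean, amortising the inference computation. All names and parameter values below are assumptions for the sketch.

```python
import numpy as np

# Toy generative model:  z ~ N(0,1),  x = w*z + eps,  eps ~ N(0, s^2).
# The exact posterior mean is  E[z|x] = w / (w^2 + s^2) * x.
rng = np.random.default_rng(0)
w, s = 2.0, 0.5
z = rng.normal(size=10_000)
x = w * z + s * rng.normal(size=z.shape)

# Amortisation: a linear "recognition network" a*x trained by gradient
# descent on mean squared error to the latent samples. The minimiser of
# E[(a*x - z)^2] recovers the exact posterior-mean coefficient.
a = 0.0
lr = 0.05
for _ in range(2000):
    grad = np.mean((a * x - z) * x)  # gradient of 0.5 * MSE w.r.t. a
    a -= lr * grad

print(a)                  # learned coefficient
print(w / (w**2 + s**2))  # analytic posterior-mean coefficient, ~0.47
```

After training, evaluating the posterior mean for a new observation costs a single multiply, rather than a fresh inference computation — the sense in which pattern-recognition machinery "amortises" the transformation.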