DALI 2017 Workshop on Theory of Generative Adversarial Networks
http://dalimeeting.org/dali2017/gener...

Organizers:
Sebastian Nowozin (Microsoft Research)
David Lopez-Paz (Facebook AI Research)

In Generative Adversarial Networks (GANs), two machines learn together about a probability distribution P by pursuing competing goals. On the one hand, the generator transforms vectors of random noise into samples that resemble the distribution P, according to the scores of the discriminator. On the other hand, the discriminator distinguishes between real samples drawn from P and fake samples synthesized by the generator. After training ends, the generator provides an implicit generative model of the distribution P, and the discriminator estimates the energy landscape of the data.

Recent efforts have established connections between GAN training and f-divergence minimization, optimal transport, and energy-based learning. However, our theoretical understanding of GANs remains in its infancy, and many fascinating questions cry out for answers. How can we better understand the optimization dynamics of GANs? How can we evaluate the quality of a GAN? How can we stabilize GAN training? How can we capture parameter uncertainty in the GAN framework, i.e. what is the analogue of the Bayesian neural network in the GAN setting? In this workshop, we will foster discussions around these and many other questions.
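To make the two-player setup concrete, here is a minimal sketch of the alternating generator/discriminator training loop described above. It is not part of the workshop material: it assumes PyTorch, a toy 1-D Gaussian as the target distribution P, and illustrative network sizes and hyperparameters chosen only for readability.

```python
# Minimal GAN training sketch (assumption: PyTorch; toy 1-D Gaussian target P).
import torch
import torch.nn as nn

noise_dim, data_dim, batch = 8, 1, 64

# Generator: maps random noise vectors to samples meant to resemble P.
G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores whether a sample looks drawn from P or synthesized by G.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def sample_real(n):
    # Toy target distribution P: a Gaussian with mean 3 and std 0.5 (an assumption).
    return 3.0 + 0.5 * torch.randn(n, data_dim)

for step in range(2000):
    real = sample_real(batch)
    fake = G(torch.randn(batch, noise_dim))

    # Discriminator step: push scores of real samples up and of fakes down.
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator step: update G so the discriminator scores its fakes as real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

After training, sampling `G(torch.randn(n, noise_dim))` gives the implicit generative model of P mentioned above, while `D` provides the learned scores over the data space.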