Denoising Diffusion Models: Generative Models of Modern Deep Learning Era (Arash Vahdat, NVIDIA)
Date: Mar 17, 2023

Abstract: Diffusion models are revolutionizing the way we train deep generative models. Comprising a forward process that iteratively adds Gaussian noise to data and a reverse process that learns to generate data by denoising, these models exhibit exceptional sample quality and diversity. However, their iterative nature often results in slow sampling. In this talk, I will provide a brief overview of denoising diffusion models and highlight some of the successful frameworks we have recently developed at NVIDIA using these models, including text-to-image models, 3D shape models, and adversarially robust classification frameworks. Additionally, I will delve into the sampling challenges of diffusion models and introduce three frameworks we have created to address them: latent score-based generative models that train diffusion models in a latent space, denoising diffusion GANs that employ complex multimodal distributions for denoising, and higher-order solvers that solve the sampling differential equations of diffusion models in fewer steps.

Bio: Arash Vahdat is a principal research scientist at NVIDIA Research specializing in generative AI technologies. Before joining NVIDIA, he was a research scientist at D-Wave Systems, where he worked on generative learning and its applications in efficient training. Before D-Wave, Arash was a research faculty member at Simon Fraser University (SFU), where he led deep learning-based video analysis research and taught master's courses on machine learning for big data. Arash's current areas of research include generative learning, representation learning, and efficient deep learning.
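To make the forward/reverse mechanics summarized in the abstract concrete, below is a minimal PyTorch sketch of a standard DDPM-style pipeline: closed-form forward noising, an epsilon-prediction training loss, and iterative ancestral sampling. It is an illustrative toy, not the specific NVIDIA frameworks discussed in the talk (LSGM, denoising diffusion GANs, higher-order solvers); the step count, linear noise schedule, and the assumed `model(x_t, t)` noise-prediction network are stand-ins chosen for clarity.

```python
# Toy DDPM-style sketch: forward noising, denoising loss, ancestral sampling.
# All hyperparameters and the `model` interface are illustrative assumptions.
import torch

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative product of alphas

def forward_noise(x0, t):
    """Forward process: sample x_t ~ q(x_t | x_0) in closed form."""
    eps = torch.randn_like(x0)
    a_bar = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))
    xt = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return xt, eps

def denoising_loss(model, x0):
    """Reverse-process training: predict the injected noise (epsilon-prediction)."""
    t = torch.randint(0, T, (x0.shape[0],))
    xt, eps = forward_noise(x0, t)
    eps_hat = model(xt, t)                 # model: assumed network taking (x_t, t)
    return ((eps_hat - eps) ** 2).mean()

@torch.no_grad()
def sample(model, shape):
    """Iterative sampling: start from pure Gaussian noise and denoise step by step."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        eps_hat = model(x, torch.full((shape[0],), t))
        a, a_bar = alphas[t], alpha_bars[t]
        mean = (x - (1.0 - a) / (1.0 - a_bar).sqrt() * eps_hat) / a.sqrt()
        x = mean + betas[t].sqrt() * torch.randn_like(x) if t > 0 else mean
    return x
```

Note how `sample` makes one network evaluation per step for all T steps; this sequential loop is exactly the slow-sampling issue the abstract highlights, and the three frameworks mentioned (latent-space diffusion, multimodal denoising with GANs, and higher-order ODE/SDE solvers) each attack it from a different angle.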