Convergence of Continuous-Time Stochastic Gradient Descent with Applications to Deep Neural Networks

Speaker: Eulàlia Nualart, Universitat Pompeu Fabra - Barcelona School of Economics

Abstract: This talk studies a continuous-time approximation of the stochastic gradient descent process for minimizing the population expected loss in learning problems. The main results establish general sufficient conditions for convergence, extending the results of Chatterjee (2022), established for (non-stochastic) gradient descent. Professor Nualart shows how the main result can be applied to the case of overparametrized neural network training. This is joint work with Gábor Lugosi (UPF).

About the workshop: This talk was presented at "Mathematical Foundations of Machine Learning: PDEs, Probability, and Dynamics," held at the Centre de Recerca Matemàtica (CRM) in Barcelona, January 7-9, 2026.

About the speaker: Eulàlia Nualart is a researcher at Universitat Pompeu Fabra and the Barcelona School of Economics, working on probability theory and stochastic analysis with applications to machine learning.

More information: https://www.crm.cat/mathematical-foun...

#MachineLearning #Mathematics #AI #DeepLearning #NeuralNetworks #TheoreticalML #DataScience #AppliedMathematics #Research #AcademicTalk #CRM #Barcelona #MathematicalFoundations #ArtificialIntelligence
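Background note: continuous-time SGD is commonly modeled as a stochastic differential equation of the form d theta_t = -grad L(theta_t) dt + sqrt(eta) * Sigma(theta_t)^{1/2} dW_t, where L is the population loss, eta the learning rate, and W_t a Brownian motion. The Python sketch below simulates such a diffusion with an Euler-Maruyama scheme on a toy quadratic loss; the quadratic loss, the constant isotropic noise level, and all parameter values are illustrative assumptions, not the construction analyzed in the talk.

```python
# Minimal illustrative sketch (assumptions, not the talk's construction):
# continuous-time SGD modeled as the SDE
#   d theta_t = -grad L(theta_t) dt + sqrt(eta) * sigma * dW_t,
# simulated via Euler-Maruyama on the toy loss L(theta) = 0.5 * ||theta||^2
# with constant isotropic noise (a hypothetical choice for illustration).
import numpy as np

rng = np.random.default_rng(0)

def grad_L(theta):
    # Gradient of the toy population loss L(theta) = 0.5 * ||theta||^2.
    return theta

def simulate(theta0, eta=0.1, sigma=0.05, dt=1e-3, T=5.0):
    """Euler-Maruyama path of the continuous-time SGD diffusion."""
    theta = np.asarray(theta0, dtype=float)
    n_steps = int(T / dt)
    path = np.empty((n_steps + 1, theta.size))
    path[0] = theta
    for k in range(n_steps):
        # Brownian increment over one time step dt
        dW = rng.normal(scale=np.sqrt(dt), size=theta.size)
        theta = theta - grad_L(theta) * dt + np.sqrt(eta) * sigma * dW
        path[k + 1] = theta
    return path

path = simulate(theta0=[2.0, -1.5])
print("final loss:", 0.5 * np.sum(path[-1] ** 2))  # near 0, up to noise-driven fluctuations
```

On this toy quadratic loss the drift contracts the iterates toward the minimizer, leaving residual fluctuations of order sqrt(eta) * sigma; shrinking dt refines the discretization of the underlying continuous-time process.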