Statistical mechanics of deep learning by Surya Ganguli
Statistical Physics Methods in Machine Learning
DATE: 26 December 2017 to 30 December 2017
VENUE: Ramanujan Lecture Hall, ICTS, Bengaluru

The theme of this Discussion Meeting is the analysis of distributed/networked algorithms in machine learning and theoretical computer science in the "thermodynamic" limit of a large number of variables. Methods from statistical physics (e.g., various mean-field approaches) simplify the performance analysis of these algorithms in the limit of many variables. In particular, phase-transition-like phenomena appear, where performance can undergo a discontinuous change as an underlying parameter is continuously varied. A provocative question to be explored at the meeting is whether these methods can shed theoretical light on the workings of deep networks for machine learning.

The Discussion Meeting aims to facilitate interaction between theoretical computer scientists, statistical physicists, machine learning researchers, and mathematicians interested in these questions. Specific topics to be covered include (but are not limited to) phase transitions in optimization and learning algorithms, matrix approximation, mixing in large networks, sub-linear-time algorithms, learning theory, and non-convex optimization. The meeting will allow structured and unstructured interactions among the participants around the main theme.

CONTACT US: spmml2017@icts.res.in
PROGRAM LINK: https://www.icts.res.in/discussion-me...

Table of Contents (powered by https://videoken.com)
0:00:00 Start
0:00:11 Statistical Physics of Deep Learning
0:01:00 Motivations for an alliance between theoretical neuroscience and theoretical machine learning
0:02:19 Talk Outline: from physics to better machine learning algorithms
0:04:59 Statistical mechanics of high dimensional data analysis
0:06:45 Statistical mechanics of complex neural systems and high dimensional data
0:07:28 Optimal inference in high dimensions
0:18:54 Talk Outline: from physics to better machine learning algorithms
0:20:47 High dimensional nonconvex optimization
0:22:06 General properties of error landscapes in high dimensions
0:34:12 Properties of Error Landscapes on the Synaptic Weight Space of a Deep Neural Net
0:35:27 How to descend saddle points
0:38:30 Performance of saddle-free Newton in learning deep neural networks
0:46:13 Learning deep generative models by reversing diffusion
0:46:27 Goal: achieve highly flexible but also tractable probabilistic generative models of data
0:47:46 Physical Intuition: Destruction of Structure through Diffusion
0:48:05 Physical Intuition: Recover Structure by Reversing Time
0:49:10 Swiss Roll
0:51:07 Dead Leaf Model
0:57:09 Natural Images
1:03:43 A key idea: solve the mixing problem during learning
1:06:30 A theory of deep neural expressivity through transient input-output chaos
1:06:42 The problem of expressivity
1:08:25 Some prior work on expressivity in neural nets
1:11:57 A maximum entropy ensemble of deep random networks
1:13:45 Emergent, deterministic signal propagation in random neural networks
1:14:42 Propagation of a single point through a deep network
1:16:05 Propagation of two points through a deep network
1:16:27 A theory of correlation propagation in a deep network
1:16:33 Propagation of correlations through a deep network
1:18:51 Propagation of a manifold through a deep network
1:20:18 Riemannian geometry I: Euclidean length
1:20:44 Riemannian geometry II: Extrinsic Gaussian Curvature
1:21:35 Riemannian geometry III: The Gauss map and Grassmannian length
1:21:39 An example: the great circle
1:22:43 Theory of curvature propagation in deep networks
1:23:53 Curvature propagation: theory and experiment
1:23:58 Exponential expressivity is not achievable by shallow nets
1:24:45 Boundary disentangling: theory
1:24:49 Summary
1:25:02 Learning speed: with orthogonal weights, sigmoidal can outperform ReLU
1:25:23 References
1:25:29 Q&A
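
The saddle-point segment of the outline (0:35:27 and 0:38:30) refers to the saddle-free Newton method of Dauphin et al. (2014), on which Ganguli was a co-author. As a rough illustration of its core idea, here is a minimal NumPy sketch on a toy two-dimensional saddle; the objective, step size, and damping constant are illustrative choices, not taken from the talk. The step rescales the gradient by the inverse of |H|, the Hessian with its eigenvalues replaced by their absolute values, so that negative-curvature directions repel the iterate instead of attracting it.

```python
import numpy as np

# Toy objective f(x, y) = x^2 - y^2, with a saddle point at the origin.
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y])

def hessian(p):
    return np.array([[2.0, 0.0], [0.0, -2.0]])

def saddle_free_newton_step(p, lr=1.0, damping=1e-4):
    """One step of Delta = -|H|^{-1} g, where |H| replaces each Hessian
    eigenvalue by its absolute value, so negative-curvature directions
    are descended rather than converged to."""
    g = grad(p)
    eigvals, eigvecs = np.linalg.eigh(hessian(p))
    abs_inv = 1.0 / (np.abs(eigvals) + damping)
    step = eigvecs @ (abs_inv * (eigvecs.T @ g))
    return p - lr * step

p = np.array([1e-3, 1e-3])   # start very close to the saddle
for _ in range(5):
    p = saddle_free_newton_step(p)
    print(p)
# The y-coordinate grows, escaping along the negative-curvature direction,
# whereas the plain Newton step -H^{-1} g would jump straight to the saddle.
```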
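The "reversing diffusion" segment (0:46:13 through 1:03:43) presents Sohl-Dickstein et al. (2015), "Deep Unsupervised Learning using Nonequilibrium Thermodynamics". Below is a minimal sketch of the forward, structure-destroying half of that construction only, assuming a fixed per-step noise level; the Swiss-roll data and the schedule are illustrative, and the paper's actual contribution, the learned reverse-time chain that recovers structure, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffusion(x0, betas):
    """Forward diffusion x_t = sqrt(1 - beta_t) * x_{t-1} + sqrt(beta_t) * noise,
    which gradually converts the data distribution into an isotropic Gaussian.
    The generative model is trained to run this chain in reverse."""
    x = x0
    for beta in betas:
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

# Two-dimensional Swiss-roll-style data collapsing to noise.
t = rng.uniform(0.0, 3.0 * np.pi, size=1000)
x0 = np.stack([t * np.cos(t), t * np.sin(t)], axis=1) / (3.0 * np.pi)
xT = forward_diffusion(x0, betas=np.full(50, 0.1))
print(x0.std(axis=0), xT.std(axis=0))  # xT is close to a unit-variance Gaussian
```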
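The signal-propagation segment (1:13:45 through 1:16:33) follows the mean-field analysis of deep random networks in Poole et al. (2016). Its central object is the length map, which tracks the squared norm of a single input across layers: q^{l+1} = σ_w² E_z[φ(√(q^l) z)²] + σ_b², with z a standard Gaussian. A minimal Monte Carlo sketch of iterating that map follows; the estimator, sample count, and parameter values are illustrative assumptions, not the talk's.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.tanh  # pointwise nonlinearity of the random-network ensemble

def length_map(q, sw2, sb2, n=200_000):
    """Mean-field length map q_{l+1} = sw2 * E_z[phi(sqrt(q) z)^2] + sb2,
    estimated by Monte Carlo over z ~ N(0, 1)."""
    z = rng.standard_normal(n)
    return sw2 * np.mean(phi(np.sqrt(q) * z) ** 2) + sb2

# For a wide random tanh network, sw2 > 1 with a small bias variance puts
# the network in the chaotic regime; q flows to a nonzero fixed point.
q, sw2, sb2 = 0.5, 2.0, 0.05
for layer in range(10):
    q = length_map(q, sw2, sb2)
    print(f"layer {layer + 1}: q = {q:.4f}")
```

The same framework yields a second recursion for the correlation between two inputs, whose behavior at the fixed point separates the ordered and chaotic phases discussed in the expressivity part of the talk.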