All teaching material is available at: [github](https://github.com/bengsV/OptML)

This video is the first exercise session for the *Optimization for Machine Learning* course (Summer Semester 2023) at LMU Munich, led by Viktor Bengs. The session focuses on revisiting multivariate differentiation, a foundational tool for optimization in ML.

*Key Topics Covered*

#### *1. Review of Differentiation Rules [02:21]*

The instructor begins with a "toolbox" reminder of the univariate rules and demonstrates how they extend to the multivariate (gradient) case:

- *Sum Rule:* The gradient of a sum is the sum of the gradients, ∇(f + g) = ∇f + ∇g [04:22].
- *Product Rule:* As in the univariate case, the gradient of f·g is ∇(f·g) = g·∇f + f·∇g [07:16].
- *Quotient Rule:* ∇(f/g) = (g·∇f − f·∇g)/g², applied when the denominator is non-zero [12:19].
- *Chain Rule:* Essential for composite functions, such as applying a univariate function h to the output of a multivariate function f, where ∇(h∘f)(x) = h'(f(x))·∇f(x) [01:15:27].

#### *2. Worked Mathematical Exercises [24:21]*

The session walks through calculating gradients and *Hessian matrices* for several specific functions:

- *Exponential of squared norm:* a function of the form exp(‖x‖²), where the chain rule is used to find the gradient and Hessian [24:54].
- *Log-sum-squared:* the logarithm of a sum of squares, demonstrating the quotient rule [42:05].
- *Sigmoid function:* A common function in logistic regression, where the instructor shows how to express the gradient in terms of the function itself, using the identity σ'(z) = σ(z)(1 − σ(z)) [55:26].
- *Jacobian Matrix:* A brief overview of computing the Jacobian for vector-valued functions [01:11:10].

#### *3. Practical Implementation in PyTorch [01:14:45]*

The final part of the video transitions to a coding tutorial using **PyTorch**:

- *Tensors:* Introduction to the `torch.tensor` object and the `requires_grad=True` parameter [01:17:33].
- *Automatic Differentiation (`autograd`):* Demonstration of the `.backward()` function to compute derivatives automatically, without manual calculation [01:21:25].
- *Visualization:* Plotting functions and their first-order Taylor approximations (tangent lines) using Matplotlib [01:24:55].
- *Multivariate Examples:* Implementing the mathematical exercises from the first half of the session in Python to verify the results [01:26:39]. Illustrative sketches of these steps are included below.

The video concludes by emphasizing that while manual calculation is important for understanding, tools like PyTorch handle these computations efficiently for complex machine learning models [01:15:47].
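As a rough illustration of the autograd workflow described above (not the instructor's exact notebook), the following sketch creates a tensor with `requires_grad=True`, computes a derivative with `.backward()`, and plots the function together with its first-order Taylor approximation at a chosen point. The function f(x) = x² and the expansion point x₀ = 1 are arbitrary choices for the demo.

```python
import torch
import matplotlib.pyplot as plt

# Point at which the first-order Taylor approximation is built (arbitrary choice).
x0 = torch.tensor(1.0, requires_grad=True)

# A simple univariate test function; any differentiable function works here.
f = lambda x: x ** 2

# Forward pass, then backward pass to populate x0.grad with df/dx at x0.
y0 = f(x0)
y0.backward()
slope = x0.grad.item()          # = 2 * x0 for f(x) = x^2

# Dense grid for plotting the function and its tangent line.
xs = torch.linspace(-2.0, 3.0, 200)
taylor = y0.item() + slope * (xs - x0.item())   # f(x0) + f'(x0) * (x - x0)

plt.plot(xs.numpy(), f(xs).numpy(), label="f(x) = x^2")
plt.plot(xs.numpy(), taylor.numpy(), "--", label="1st-order Taylor at x0 = 1")
plt.scatter([x0.item()], [y0.item()], color="k", zorder=3)
plt.legend()
plt.show()
```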
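To connect the hand calculation for the sigmoid with autograd, the identity σ'(z) = σ(z)(1 − σ(z)) can be checked numerically. This is a minimal sketch of such a check, not code taken from the video:

```python
import torch

# Points at which to compare the autograd derivative with the closed form.
z = torch.linspace(-4.0, 4.0, 9, requires_grad=True)

s = torch.sigmoid(z)

# Sum the elementwise outputs so backward() receives a scalar; since each output
# depends only on its own input, z.grad then holds the elementwise derivative.
s.sum().backward()

closed_form = s * (1.0 - s)          # sigma(z) * (1 - sigma(z))
print(torch.allclose(z.grad, closed_form.detach(), atol=1e-6))  # expected: True
```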
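For the multivariate exercise, the chain-rule results for f(x) = exp(‖x‖²), namely ∇f(x) = 2·exp(‖x‖²)·x and ∇²f(x) = 2·exp(‖x‖²)·(I + 2xxᵀ), can be verified against `torch.autograd`. The sketch below assumes exactly this form of the "exponential of squared norm" exercise, which may differ in sign or scaling from the function used in the video:

```python
import torch
from torch.autograd.functional import hessian

def f(x):
    # Assumed form of the exercise: f(x) = exp(||x||^2).
    return torch.exp(torch.dot(x, x))

x = torch.tensor([0.3, -0.5, 1.0], requires_grad=True)

# Gradient via autograd.
(grad_autograd,) = torch.autograd.grad(f(x), x)

# Closed-form results from the chain rule:
#   grad    = 2 exp(||x||^2) x
#   Hessian = 2 exp(||x||^2) (I + 2 x x^T)
scale = 2.0 * torch.exp(torch.dot(x, x))
grad_manual = scale * x
hess_manual = scale * (torch.eye(3) + 2.0 * torch.outer(x, x))

# Hessian via autograd (double differentiation).
hess_autograd = hessian(f, x.detach())

print(torch.allclose(grad_autograd, grad_manual.detach(), atol=1e-4))  # expected: True
print(torch.allclose(hess_autograd, hess_manual.detach(), atol=1e-4))  # expected: True
```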