When an AI makes a wrong prediction, an error signal races backward through billions of connections — and every single weight learns exactly how much it contributed to the failure. This is backpropagation, and it's nothing more than the chain rule from calculus, organized in the most ingenious way.

In Episode 4 of The Math Behind AI, we build backpropagation from the ground up. Starting with computation graphs — the hidden scaffolding behind every neural network — we trace a complete forward pass, then reverse direction to propagate gradients backward using the chain rule. We prove every derivative with numerical verification, reveal why backpropagation is nearly a trillion times faster than the naive approach (dynamic programming in disguise), and show how the full training loop — forward, backward, update — connects everything from Episode 2's gradient descent to Episode 3's Bayesian priors.

📑 CHAPTERS
0:00 The Error Signal — Hook
0:30 Computation Graphs
1:27 The Forward Pass
2:11 The Chain Rule
3:24 The Backward Pass
4:39 Why Backpropagation Is Efficient
5:52 Scaling to Real Neural Networks
6:52 The Training Loop
7:53 Recap & Episode 5 Preview

📌 Series Playlist: • The Math Behind AI — From Vectors to Trans...
📌 Previous Episode: EP3 — • Why a 99% Accurate Test Is Wrong (The Math...
📌 Next Episode: EP5 — Softmax & Cross-Entropy (Coming Soon)

#TheMathBehindAI #Backpropagation #MachineLearning
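The ideas the episode walks through — a computation graph's forward pass, the chain rule applied backward, numerical verification of each derivative, and the forward/backward/update training loop — can be sketched in a few lines of plain Python. This is an illustrative toy (a single linear unit with squared-error loss, names like `forward`, `backward`, and `numeric_grad` are my own), not the video's code:

```python
# Tiny computation graph: loss L = (w*x + b - y)**2
# Nodes: z = w*x + b  ->  e = z - y  ->  L = e*e

def forward(w, b, x, y):
    z = w * x + b          # linear node
    e = z - y              # error node
    return e * e           # squared-error loss node

def backward(w, b, x, y):
    # Backward pass: apply the chain rule node by node, in reverse.
    z = w * x + b
    e = z - y
    dL_de = 2 * e          # dL/de from L = e*e
    dL_dz = dL_de * 1.0    # de/dz = 1, since e = z - y
    dL_dw = dL_dz * x      # dz/dw = x
    dL_db = dL_dz * 1.0    # dz/db = 1
    return dL_dw, dL_db

def numeric_grad(f, args, i, h=1e-6):
    # Central-difference check: (f(..+h) - f(..-h)) / 2h
    hi = list(args); hi[i] += h
    lo = list(args); lo[i] -= h
    return (f(*hi) - f(*lo)) / (2 * h)

w, b, x, y = 0.5, 0.1, 2.0, 1.0
gw, gb = backward(w, b, x, y)

# Numerical verification of the analytic gradients:
assert abs(gw - numeric_grad(forward, (w, b, x, y), 0)) < 1e-5
assert abs(gb - numeric_grad(forward, (w, b, x, y), 1)) < 1e-5

# Training loop: forward, backward, update (Episode 2's gradient descent).
lr = 0.1
for _ in range(100):
    gw, gb = backward(w, b, x, y)
    w -= lr * gw
    b -= lr * gb
```

After the loop the loss `forward(w, b, x, y)` has shrunk essentially to zero; real frameworks do the same three-step dance, just over millions of parameters and with the backward pass generated automatically from the graph.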