Building makemore Part 3: Activations & Gradients, BatchNorm
Published: 2 years ago

Tags: neural network, deep learning, makemore, batchnorm, batch normalization, pytorch

We dive into some of the internals of MLPs with multiple layers and scrutinize the statistics of the forward-pass activations, the backward-pass gradients, and some of the pitfalls when they are improperly scaled. We also look at the typical diagnostic tools and visualizations you'd want to use to understand the health of your deep network. We learn why training deep neural nets can be fragile, and introduce the first modern innovation that made doing so much easier: Batch Normalization. Residual connections and the Adam optimizer remain notable to-dos for a later video. (A few illustrative code sketches of the techniques covered follow the chapter list below.)

Links:
  • makemore on GitHub: https://github.com/karpathy/makemore
  • Jupyter notebook I built in this video: https://github.com/karpathy/nn-zero-t...
  • Colab notebook: https://colab.research.google.com/dri...
  • my website: https://karpathy.ai
  • my twitter: /karpathy
  • Discord channel: /discord

Useful links:
  • "Kaiming init" paper: https://arxiv.org/abs/1502.01852
  • BatchNorm paper: https://arxiv.org/abs/1502.03167
  • Bengio et al. 2003 MLP language model paper (pdf): https://www.jmlr.org/papers/volume3/b...
  • Good paper illustrating some of the problems with batchnorm in practice: https://arxiv.org/abs/2105.07576

Exercises:
  • E01: I did not get around to seeing what happens when you initialize all weights and biases to zero. Try this and train the neural net. You might think either that 1) the network trains just fine or 2) the network doesn't train at all, but actually it is 3) the network trains, but only partially, and achieves a pretty bad final performance. Inspect the gradients and activations to figure out what is happening, why the network is only partially training, and what part is being trained exactly.
  • E02: BatchNorm, unlike other normalization layers such as LayerNorm/GroupNorm, has the big advantage that after training, the batchnorm gamma/beta can be "folded into" the weights of the preceding Linear layer, removing the need to forward it at test time. Set up a small 3-layer MLP with batchnorms, train the network, then "fold" the batchnorm gamma/beta into the preceding Linear layer's W, b by creating a new W2, b2 and erasing the batch norm. Verify that this gives the same forward pass during inference; i.e., the batchnorm is there just for stabilizing the training and can be thrown out after training is done. Pretty cool. (A sketch of this fold appears below.)

Chapters:
  00:00:00 intro
  00:01:22 starter code
  00:04:19 fixing the initial loss
  00:12:59 fixing the saturated tanh
  00:27:53 calculating the init scale: "Kaiming init"
  00:40:40 batch normalization
  01:03:07 batch normalization: summary
  01:04:50 real example: resnet50 walkthrough
  01:14:10 summary of the lecture
  01:18:35 just kidding: part 2: PyTorch-ifying the code
  01:26:51 viz #1: forward pass activations statistics
  01:30:54 viz #2: backward pass gradient statistics
  01:32:07 the fully linear case of no non-linearities
  01:36:15 viz #3: parameter activation and gradient statistics
  01:39:55 viz #4: update:data ratio over time
  01:46:04 bringing back batchnorm, looking at the visualizations
  01:51:34 summary of the lecture for real this time
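
Two of the early fixes in the lecture are easy to sanity-check numerically. A minimal sketch, assuming the video's 27-character vocabulary (the shapes below are otherwise arbitrary): at initialization the loss should sit near -log(1/27) if the softmax is diffuse, and a tanh layer is "saturated" when many outputs sit near ±1, where the gradient vanishes.

```python
import torch

# Expected initial loss for a 27-way uniform softmax (the video's vocab size):
print(-torch.tensor(1 / 27.0).log().item())      # ~3.29; much higher at init => badly scaled logits

# "Saturated tanh" check: fraction of activations in the flat |h| > 0.97 region.
h = torch.tanh(torch.randn(32, 200) * 3)         # deliberately too-large pre-activations
print((h.abs() > 0.97).float().mean().item())    # large fraction => gradients vanish through tanh
```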
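
The "Kaiming init" section boils down to scaling each weight matrix by gain/sqrt(fan_in) so activation statistics are preserved from layer to layer, with gain 5/3 for tanh. A minimal sketch, not the notebook's exact code; the layer sizes are placeholders:

```python
import torch

fan_in, fan_out = 30, 200                  # placeholder layer sizes
gain = 5 / 3                               # recommended gain for tanh
W = torch.randn(fan_in, fan_out) * gain / fan_in**0.5

x = torch.randn(1000, fan_in)              # unit-Gaussian input batch
h = torch.tanh(x @ W)
print(x.std().item(), h.std().item())      # both stds stay near 1 with this scaling
```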
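
The batch-normalization layer built in the video normalizes each feature over the batch, applies a learnable gamma/beta, and keeps running statistics for inference. A condensed sketch along those lines (eps and momentum are the usual defaults, not necessarily the video's values):

```python
import torch

class BatchNorm1dSketch:
    def __init__(self, dim, eps=1e-5, momentum=0.1):
        self.eps, self.momentum = eps, momentum
        self.training = True
        self.gamma = torch.ones(dim)          # learnable scale
        self.beta = torch.zeros(dim)          # learnable shift
        self.running_mean = torch.zeros(dim)  # estimated during training, used at inference
        self.running_var = torch.ones(dim)

    def __call__(self, x):                    # x: (batch, dim)
        if self.training:
            mean, var = x.mean(0, keepdim=True), x.var(0, keepdim=True)
            with torch.no_grad():             # running stats are not part of the graph
                self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
                self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mean, var = self.running_mean, self.running_var
        xhat = (x - mean) / torch.sqrt(var + self.eps)  # normalize each feature over the batch
        return self.gamma * xhat + self.beta
```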
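
For exercise E02, the fold works because at test time batchnorm is a fixed affine map: bn(Wx + b) = scale·(Wx + b − μ) + β with scale = γ/√(var + ε), so W2 = scale·W and b2 = scale·(b − μ) + β. A hedged sketch against torch.nn; the layer sizes and the fake "trained" statistics are placeholders:

```python
import torch
import torch.nn as nn

lin = nn.Linear(10, 20)
bn = nn.BatchNorm1d(20)
bn.eval()                                     # inference mode: use running stats
with torch.no_grad():
    bn.running_mean.uniform_(-1, 1)           # pretend these came from training
    bn.running_var.uniform_(0.5, 2.0)

    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)          # gamma / sqrt(var + eps)
    fused = nn.Linear(10, 20)
    fused.weight.copy_(lin.weight * scale[:, None])                  # W2 = scale * W
    fused.bias.copy_((lin.bias - bn.running_mean) * scale + bn.bias) # b2 = scale*(b - mu) + beta

x = torch.randn(32, 10)
print(torch.allclose(bn(lin(x)), fused(x), atol=1e-6))  # True: identical forward pass
```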
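
The last diagnostic (viz #4) tracks, per parameter, log10 of std(lr·grad) / std(param) after each optimizer step; the lecture's rule of thumb is that values around −3 indicate well-scaled updates. A sketch of that measurement (the function name and the restriction to 2-D weight matrices are choices of this writeup, not the notebook's exact code):

```python
import torch

def update_data_ratios(parameters, lr):
    # log10( std(lr * grad) / std(param) ) for each weight matrix;
    # around -3 on this scale is the lecture's rule of thumb for healthy updates.
    with torch.no_grad():
        return [((lr * p.grad).std() / p.std()).log10().item()
                for p in parameters
                if p.grad is not None and p.ndim == 2]
```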
