Deep dive - Better Attention layers for Transformer models

The self-attention mechanism is at the core of Transformer models. As powerful as it is, it requires significant compute and memory bandwidth, leading to scalability issues as models grow more complex and context lengths increase. In this video, we'll quickly review the computation involved in the self-attention mechanism and its multi-head variant. Then, we'll discuss newer attention implementations focused on compute and memory optimizations, namely Multi-Query Attention, Group-Query Attention, Sliding Window Attention, Flash Attention v1 and v2, and Paged Attention.

Slides: https://fr.slideshare.net/slideshow/j...

⭐️⭐️⭐️ Don't forget to subscribe to be notified of future videos. Follow me on Medium at /julsimon or Substack at https://julsimon.substack.com. ⭐️⭐️⭐️

00:00 Introduction
03:00 Self-attention
07:20 Multi-Head Attention (MHA)
12:32 Multi-Query Attention (MQA)
18:45 Group-Query Attention (GQA)
22:47 Sliding Window Attention (SWA)
26:17 Flash Attention
31:28 Flash Attention v2
34:36 Paged Attention
39:00 The Hugging Face LLM performance leaderboard
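To make the compute and memory trade-offs concrete, below is a minimal NumPy sketch (not code from the video) of scaled dot-product attention with a configurable number of key/value heads: setting n_kv_heads equal to the number of query heads gives standard Multi-Head Attention, setting it to 1 gives Multi-Query Attention, and intermediate values correspond to Group-Query Attention. All function names, shapes, and toy dimensions are illustrative assumptions.

```python
# Minimal sketch: scaled dot-product attention with grouped key/value heads.
# n_kv_heads == n_heads -> Multi-Head Attention (MHA)
# n_kv_heads == 1       -> Multi-Query Attention (MQA)
# 1 < n_kv_heads < n_heads -> Group-Query Attention (GQA)
# Shapes, names, and toy sizes are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def grouped_attention(q, k, v, n_heads, n_kv_heads):
    # q: (seq, n_heads, d_head); k, v: (seq, n_kv_heads, d_head)
    seq, _, d_head = q.shape
    group = n_heads // n_kv_heads            # query heads sharing one K/V head
    out = np.empty_like(q)
    for h in range(n_heads):
        kv = h // group                      # K/V head used by this query head
        scores = q[:, h, :] @ k[:, kv, :].T / np.sqrt(d_head)  # (seq, seq)
        out[:, h, :] = softmax(scores) @ v[:, kv, :]
    return out.reshape(seq, n_heads * d_head)

seq, d_head, n_heads = 8, 16, 8
q = np.random.randn(seq, n_heads, d_head)

for n_kv in (8, 2, 1):                       # MHA, GQA, MQA
    k = np.random.randn(seq, n_kv, d_head)
    v = np.random.randn(seq, n_kv, d_head)
    y = grouped_attention(q, k, v, n_heads, n_kv)
    kv_cache = 2 * seq * n_kv * d_head       # floats cached per layer for K and V
    print(f"n_kv_heads={n_kv}: output {y.shape}, KV cache entries {kv_cache}")
```

The printed KV-cache sizes illustrate why MQA and GQA ease the memory-bandwidth pressure during decoding: fewer key/value heads mean proportionally fewer cached activations per token, while the query side and the output shape stay unchanged.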
