Math You Need for Machine Learning | Intro to Language Models & Embeddings
Let's walk through the most famous language model of all: Word2Vec. We'll build an intuition for how language modelling works, and how words are embedded to represent meanings. Then we'll connect what we learned to the Large Language Models of today, introducing key concepts like contrastive learning, self-supervised learning, autoregressive models, and transfer learning. Beyond words, we'll also explore how embeddings can bridge language and vision: we'll cover CLIP, which is surprisingly similar to Word2Vec! Finally, we'll see how embeddings can be used to morph images together through Neural Style Transfer. We'll continue to build an understanding of these concepts and models in future videos, so stay tuned!

Presentation with links and extra details: https://docs.google.com/presentation/...
TensorFlow embedding projector for exploring embeddings visually: https://projector.tensorflow.org
TensorFlow tutorial for coding Word2Vec from scratch: https://www.tensorflow.org/text/tutor...

00:00 Introduction
00:43 Language Models: predicting neighbouring words
02:02 Word embeddings & word similarities
04:24 Softmax: scoring competing possibilities
05:37 Word2Vec version one: smart, but slow
06:29 A change of perspective: classifying real vs. fake word pairs
07:34 Training procedure walkthrough. Negative Sampling
08:38 Sigmoid: making classifications
10:19 Objective of Word2Vec
10:52 Visualization of embeddings
11:29 CLIP: connecting text and images. Contrastive Learning
12:23 Objective of CLIP. A familiar objective!
12:47 Connection to Large Language Models
13:32 Self-supervised learning
14:00 The power of predicting the next token
14:32 Autoregressive models
15:10 Transfer learning
15:53 Representations in deep neural networks
16:03 Morphing images through Neural Style Transfer
17:17 Conclusion
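The core idea from the chapters on the classification view, sigmoid scoring, and negative sampling can be sketched in a few lines. This is a minimal NumPy illustration, not the video's code: the table sizes, scales, and function names (`pair_score`, `loss`) are made up for the example. A real vs. fake word pair is scored by taking the sigmoid of the dot product between a center-word embedding and a context-word embedding, and the negative-sampling objective pushes real pairs toward 1 and randomly sampled fake pairs toward 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration only.
vocab_size, dim = 10, 4

# Word2Vec keeps two embedding tables: one for center words, one for context words.
center_emb = rng.normal(scale=0.1, size=(vocab_size, dim))
context_emb = rng.normal(scale=0.1, size=(vocab_size, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pair_score(center, context):
    # Probability that (center, context) is a real co-occurring pair:
    # sigmoid of the dot product of the two embeddings.
    return sigmoid(center_emb[center] @ context_emb[context])

def loss(center, context, negatives):
    # Negative-sampling objective: maximize the score of the real pair,
    # minimize the scores of randomly sampled fake pairs.
    pos = -np.log(pair_score(center, context))
    neg = -sum(np.log(1.0 - pair_score(center, n)) for n in negatives)
    return pos + neg
```

Training would then nudge both embedding tables by gradient descent on this loss; words that appear in similar contexts end up with similar embedding vectors, which is what the TensorFlow projector linked above visualizes.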