CSCI 3151 - M41 - Embeddings for images, text, and graphs
This module zooms in on one especially powerful kind of representation: embeddings—vector spaces where geometry reflects meaning. We look at how modern systems turn messy objects like sentences, images, and graph nodes into points in a shared R^d space, so that simple operations (dot products, k-NN, linear probes) become surprisingly strong tools. Conceptually, we connect classic ideas like bag-of-words and TF–IDF to dense word embeddings (word2vec, GloVe, fastText) and then to contextual and multimodal embeddings that power current language and vision models. Along the way, we revisit the “objectives shape representations” theme from M40, showing how contrastive and classification losses carve up these spaces into clusters, manifolds, and directions that encode semantics, structure, and relationships.

On the practical side, we work through two concrete case studies: exploring pre-trained word embeddings to visualize semantic neighborhoods and analogies, and using a frozen CNN (e.g., ResNet) as an image embedding extractor for CIFAR-10 to build simple k-NN and linear-probe classifiers in embedding space. We also briefly introduce graph embeddings (DeepWalk/node2vec, GNNs) to show how similar principles apply beyond text and images.

Throughout, we emphasize evaluation and risk: how to sanity-check embedding quality with downstream tasks and visualizations, and how embeddings can encode social bias or leak sensitive information if used uncritically. By the end, students should be able to explain what an embedding is, use pre-trained embedding models as drop-in feature extractors, interpret basic diagnostics of embedding spaces, and articulate why these vector representations sit at the core of modern ML pipelines.

Course module page: https://web.cs.dal.ca/~rudzicz/Teaching/CS...
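The word-embedding case study rests on the idea that vector arithmetic can capture semantic relationships. A minimal sketch of the classic analogy test, using a tiny hand-made 4-dimensional vocabulary rather than real pre-trained vectors (word2vec/GloVe vectors are typically 100–300 dimensional and would be loaded from a file), might look like this:

```python
import numpy as np

# Toy embedding table standing in for pre-trained word vectors.
# These hand-made 4-d vectors are for illustration only.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(u, v):
    # Cosine similarity: dot product of the unit-normalized vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest(vec, exclude=()):
    # Return the vocabulary word whose embedding has the highest
    # cosine similarity to `vec`, skipping words in `exclude`.
    best, best_sim = None, -2.0
    for w, v in emb.items():
        if w in exclude:
            continue
        sim = cosine(vec, v)
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Classic analogy: king - man + woman should land near "queen".
target = emb["king"] - emb["man"] + emb["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # → queen
```

With real pre-trained vectors the same arithmetic is run over a vocabulary of hundreds of thousands of words; libraries such as Gensim expose this directly via a `most_similar`-style query.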
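The second case study treats a frozen CNN purely as a feature extractor and classifies in embedding space. The sketch below assumes the embedding step has already happened (in practice, the 512-d penultimate-layer activations of a torchvision ResNet-18 with its classifier head removed); it fakes two well-separated classes with synthetic vectors so the k-NN part stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for embeddings produced by a frozen CNN. Two synthetic
# classes, well separated in a 16-d space, replace real CIFAR-10
# features so the example runs without any model download.
d = 16
train_x = np.vstack([rng.normal(0.0, 0.3, (50, d)),
                     rng.normal(2.0, 0.3, (50, d))])
train_y = np.array([0] * 50 + [1] * 50)
test_x = np.vstack([rng.normal(0.0, 0.3, (10, d)),
                    rng.normal(2.0, 0.3, (10, d))])
test_y = np.array([0] * 10 + [1] * 10)

def knn_predict(x, k=5):
    # Euclidean k-NN in embedding space: take the majority label
    # of the k closest training embeddings.
    dists = np.linalg.norm(train_x - x, axis=1)
    neighbors = train_y[np.argsort(dists)[:k]]
    return np.bincount(neighbors).argmax()

preds = np.array([knn_predict(x) for x in test_x])
accuracy = (preds == test_y).mean()
print("k-NN accuracy:", accuracy)
```

A linear probe is the same recipe with the k-NN classifier swapped for logistic regression on the frozen embeddings; the point in both cases is that a simple classifier can succeed because the embedding space already separates the classes.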