This module introduces linear dimensionality reduction via Principal Component Analysis (PCA) as a way to compress high-dimensional data while retaining as much structure as possible. We start from a data-matrix view of PCA, building up the sample mean, centered data matrix, and covariance matrix, and use these to define variance along a direction in feature space. We then frame PCA as an optimization problem (choosing unit-length directions that maximize projected variance) and solve it with Lagrange multipliers to show that principal components are eigenvectors of the covariance matrix. A stretch section connects this to the singular value decomposition (SVD) and the Eckart–Young theorem, explaining PCA as the best rank-k approximation in Frobenius norm and giving an intuitive “energy” picture of singular values.

On the practical side, we work through a 2D toy example to visualize projections, orthogonal reconstruction, and how PCA directions align with the main axes of variation. A larger worked example on handwritten digits compares logistic regression on raw pixel features versus PCA-compressed features at different dimensions, using test accuracy and explained variance to highlight tradeoffs between compression and performance. The module also covers where PCA belongs in the pipeline (fit on train data only), the role of centering and scaling, sensitivity to outliers, and how unsupervised dimensionality reduction can obscure interpretability or entangle sensitive information.

We close by situating PCA in the broader landscape (kernel PCA, autoencoders and VAEs, and visualization tools like t-SNE and UMAP), emphasizing that PCA is both a core linear tool and a conceptual stepping stone to modern representation learning. By the end, students should be able to derive and interpret PCA’s optimization objective, implement PCA in Python, choose a reasonable number of components, and critically assess when PCA is helping or hurting downstream models.

Course module page: https://web.cs.dal.ca/~rudzicz/Teaching/CS...
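As a quick sketch of the optimization view described above (the notation here is ours, with S denoting the sample covariance matrix of the centered data), the first principal component solves

    \max_{\mathbf{w}} \; \mathbf{w}^\top S \mathbf{w} \quad \text{subject to} \quad \mathbf{w}^\top \mathbf{w} = 1 .

Introducing a Lagrange multiplier \lambda and setting the gradient to zero,

    \mathcal{L}(\mathbf{w}, \lambda) = \mathbf{w}^\top S \mathbf{w} - \lambda\,(\mathbf{w}^\top \mathbf{w} - 1),
    \qquad
    \nabla_{\mathbf{w}} \mathcal{L} = 2 S \mathbf{w} - 2 \lambda \mathbf{w} = 0
    \;\Rightarrow\; S \mathbf{w} = \lambda \mathbf{w},

so \mathbf{w} must be an eigenvector of S, and since the attained variance is \mathbf{w}^\top S \mathbf{w} = \lambda, the maximizer is the eigenvector with the largest eigenvalue; the top-k eigenvectors give the first k principal components.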
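The 2D toy example can be reproduced along these lines. Below is a minimal NumPy sketch (the synthetic data and variable names are ours, not the module's code) showing centering, the covariance eigendecomposition, projection onto the top component, and orthogonal reconstruction.

    import numpy as np

    # Synthetic correlated 2D data (hypothetical stand-in for the module's toy example)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.5, 0.5]])

    # Sample mean, centered data matrix, covariance matrix
    mean = X.mean(axis=0)
    Xc = X - mean
    S = Xc.T @ Xc / (len(X) - 1)

    # Principal components are the eigenvectors of S, sorted by decreasing eigenvalue
    eigvals, eigvecs = np.linalg.eigh(S)        # eigh: S is symmetric
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Project onto the top k directions, then reconstruct (orthogonal projection)
    k = 1
    W = eigvecs[:, :k]          # (2, k) matrix of principal directions
    Z = Xc @ W                  # compressed coordinates
    X_hat = Z @ W.T + mean      # rank-k reconstruction of X

    print("explained variance ratio:", eigvals[:k].sum() / eigvals.sum())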
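For the handwritten-digits comparison, a hedged scikit-learn sketch is below; the bundled 8x8 load_digits dataset, the StandardScaler/PCA/LogisticRegression pipeline, and the component counts (2, 10, 30) are illustrative assumptions, not necessarily the module's exact setup. Fitting PCA inside the pipeline means it only ever sees the training split, matching the "fit on train data only" point above.

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # Baseline: logistic regression on raw pixel features
    raw = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
    raw.fit(X_train, y_train)
    print("raw pixels: test acc =", raw.score(X_test, y_test))

    # PCA-compressed features at a few dimensionalities; PCA is fit on the
    # training data only because it sits inside the pipeline
    for k in (2, 10, 30):
        compressed = make_pipeline(StandardScaler(), PCA(n_components=k),
                                   LogisticRegression(max_iter=5000))
        compressed.fit(X_train, y_train)
        pca = compressed.named_steps["pca"]
        print(f"k={k}: test acc = {compressed.score(X_test, y_test):.3f}, "
              f"explained variance = {pca.explained_variance_ratio_.sum():.3f}")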