How t-SNE works? | AI ML tutorials by a Data Scientist | Thinking Neuron
https://thinkingneuron.com/data-scien... t-SNE stands for "t-distributed Stochastic Neighbor Embedding". It is another dimensionality reduction technique, aimed primarily at visualizing data. Plotting high-dimensional data (e.g. 100 columns) is computationally intensive and hard to interpret, but if the same data can be represented in 2 or 3 dimensions, it can be plotted easily and its patterns understood. This can also be done with PCA, but PCA is a linear method: if the relationships between the features are non-linear, PCA will fail to produce components that explain the variance well. This is where t-SNE comes into the picture. t-SNE can capture non-linear relationships between features and represent them in a lower dimension. It works by finding a probability distribution in the low-dimensional space that is "similar" to the distribution of the original high-dimensional data.
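
A minimal sketch of this idea using scikit-learn's TSNE (the digits dataset and the hyperparameter values here are illustrative choices, not something prescribed in the video): the 64-dimensional digit images are embedded into 2 dimensions and plotted, so points with the same digit label should form visible clusters.

    # Sketch: project 64-dimensional digit images down to 2-D with t-SNE and plot them.
    # Dataset and hyperparameter choices are illustrative only.
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_digits
    from sklearn.manifold import TSNE

    # 1,797 samples, 64 features (8x8 pixel intensities)
    X, y = load_digits(return_X_y=True)

    # t-SNE searches for a 2-D embedding whose pairwise-similarity distribution
    # resembles that of the original 64-D data
    tsne = TSNE(n_components=2, perplexity=30, random_state=42)
    X_2d = tsne.fit_transform(X)

    # Colour each point by its digit label to see the recovered structure
    plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, cmap="tab10", s=8)
    plt.colorbar(label="digit label")
    plt.title("t-SNE embedding of the digits dataset")
    plt.show()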