
DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained)
Published: 4 years ago

Tags: deep learning, machine learning, arxiv, explained, neural networks, ai, artificial intelligence, paper, deep learning tutorial, what is deep learning, introduction to deep learning, facebook, facebook ai, fair, byol, swav, self supervised learning, unsupervised feature learning, unsupervised machine learning, feature engineering, stop gradient, dino, self distillation, self-distillation, segmentation maps, visual transformer, visual transformer self supervised, imagenet


#dino #facebook #selfsupervised

Self-Supervised Learning is the final frontier in Representation Learning: getting useful features without any labels. Facebook AI's new system, DINO, combines advances in Self-Supervised Learning for Computer Vision with the new Vision Transformer (ViT) architecture and achieves impressive results without any labels. Attention maps can be directly interpreted as segmentation maps, and the obtained representations can be used for image retrieval and zero-shot k-nearest neighbor classifiers (KNNs).

OUTLINE:
0:00 - Intro & Overview
6:20 - Vision Transformers
9:20 - Self-Supervised Learning for Images
13:30 - Self-Distillation
15:20 - Building the teacher from the student by moving average
16:45 - DINO Pseudocode
23:10 - Why Cross-Entropy Loss?
28:20 - Experimental Results
33:40 - My Hypothesis why this works
38:45 - Conclusion & Comments

Paper: https://arxiv.org/abs/2104.14294
Blog: / dino-paws-computer-vision-with-self-superv...
Code: https://github.com/facebookresearch/dino
My Video on ViT: • An Image is Worth 16x16 Words: Transf...
My Video on BYOL: • BYOL: Bootstrap Your Own Latent: A Ne...

Abstract: In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.

Authors: Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: / discord
BitChute: https://www.bitchute.com/channel/yann...
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: / yannic-kilcher-488534136
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
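The self-distillation loop covered in the chapters "Self-Distillation", "Building the teacher from the student by moving average" and "DINO Pseudocode" is compact enough to sketch. Below is a minimal PyTorch-style sketch of that loop, not the official code from the linked repository: the teacher starts as a copy of the student and is updated as an exponential moving average of it, the teacher's outputs are centered and sharpened before the softmax, gradients are stopped on the teacher side, and the loss is a cross-entropy between the teacher's and student's output distributions across two augmented views. Function names such as `augment`, the temperatures, and the momentum values are illustrative placeholders, not the paper's exact recipe.

```python
import copy
import torch
import torch.nn.functional as F

# Illustrative sketch of the DINO self-distillation loop (not the official code).
# `student` is any network mapping images to K-dimensional outputs (K = out_dim).

def dino_loss(teacher_out, student_out, center, teacher_temp=0.04, student_temp=0.1):
    """Cross-entropy between the centered/sharpened teacher distribution and the student."""
    t = F.softmax((teacher_out.detach() - center) / teacher_temp, dim=-1)  # stop-gradient, center, sharpen
    log_s = F.log_softmax(student_out / student_temp, dim=-1)
    return -(t * log_s).sum(dim=-1).mean()

def train_dino(student, loader, augment, epochs=1, lr=1e-4,
               net_momentum=0.996, center_momentum=0.9, out_dim=1024):
    teacher = copy.deepcopy(student)            # teacher starts as a copy of the student
    for p in teacher.parameters():
        p.requires_grad_(False)                 # teacher is never updated by gradients
    center = torch.zeros(out_dim)
    opt = torch.optim.AdamW(student.parameters(), lr=lr)

    for _ in range(epochs):
        for x in loader:
            x1, x2 = augment(x), augment(x)     # two random views of the same images
            s1, s2 = student(x1), student(x2)
            with torch.no_grad():
                t1, t2 = teacher(x1), teacher(x2)

            # Cross terms: the teacher on one view supervises the student on the other.
            loss = 0.5 * (dino_loss(t1, s2, center) + dino_loss(t2, s1, center))
            opt.zero_grad()
            loss.backward()
            opt.step()

            with torch.no_grad():
                # Teacher weights = exponential moving average of the student weights.
                for pt, ps in zip(teacher.parameters(), student.parameters()):
                    pt.mul_(net_momentum).add_(ps, alpha=1 - net_momentum)
                # Center = moving average of teacher outputs; together with
                # sharpening this is what avoids collapse to a constant output.
                center = center_momentum * center + (1 - center_momentum) * torch.cat([t1, t2]).mean(dim=0)
    return student, teacher
```

The paper's full recipe also relies on multi-crop training and the momentum encoder schedule (as the abstract notes), but the collapse-avoidance mechanism discussed in the video, centering plus sharpening plus stop-gradient, is entirely contained in the few lines above.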

Related videos
  • V-JEPA: Revisiting Feature Prediction for Learning Visual Representations from Video (Explained) (1 year ago)
  • Visualizing transformers and attention | Talk for TNG Big Tech Day '24 (5 months ago)
  • Supervised Contrastive Learning (5 years ago)
  • How AI Image Generators Work (Stable Diffusion / Dall-E) - Computerphile (2 years ago)
  • Harvard Professor Explains Algorithms in 5 Levels of Difficulty | WIRED (1 year ago)
  • MIT Introduction to Deep Learning | 6.S191 (2 months ago)
  • An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained) (4 years ago)
  • #55 Dr. ISHAN MISRA - Self-Supervised Vision Models (3 years ago)
  • An introduction to Policy Gradient methods - Deep Reinforcement Learning (6 years ago)
  • Transformers (how LLMs work) explained visually | DL5 (1 year ago)
