For the full experience, and for links to all referenced content, visit our website: https://lamaai.io

Yann LeCun on Self-Supervised Learning

Yann LeCun, chief AI scientist at Facebook and a pioneer of the CNN, collaborates with research scientist Ishan Misra on a blog post titled Self-Supervised Learning: The Dark Matter of Intelligence. They discuss the limitations of supervised learning and how self-supervised techniques can be leveraged to scale AI towards general intelligence. The post surveys many self-supervised methods; non-exhaustively, Siamese networks, contrastive methods, and non-contrastive methods are a few of the techniques they discuss.

More self-supervised learning!

Continuing on the topic of self-supervised learning, two new SoTA self-supervised learning libraries have been updated/released. The first is vissl.ai, a first-party Facebook Research self-supervised vision library. Its modular, YAML-based design aims to accelerate the research cycle in self-supervised learning, from designing a new self-supervised task to evaluating the learned representations. The second is a community library by Twitter user @KeremTurgutlu: self_supervised, a self-supervised vision library that implements popular SoTA self-supervised learning algorithms as fast.ai callbacks.

SpeechBrain is released

SpeechBrain is an open-source toolkit designed to speed up research and development of speech technologies. It is flexible, modular, easy to use, and well documented. The toolkit supports many tasks such as speech recognition, speaker recognition, and multi-microphone signal processing (and many more). They feature tutorials on their website, some of which are aimed at beginners, so if you're interested in getting into signal processing, check them out here!
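To make the contrastive methods mentioned in the self-supervised overview above more concrete, here is a minimal NumPy sketch of a contrastive objective in the NT-Xent style used by methods like SimCLR. This is an illustrative toy, not the implementation from the blog post or from any of the libraries above; the function name and the simplified setup (positives only on the diagonal, one softmax per row) are my own assumptions.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    """Toy contrastive loss for paired embeddings (illustrative, not VISSL's API).

    z1[i] and z2[i] are embeddings of two augmented views of sample i.
    Each row of z1 is pulled toward its paired row in z2 (the positive)
    and pushed away from every other row (the negatives).
    """
    # L2-normalise so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature          # (B, B) similarity matrix
    # Positive pairs sit on the diagonal; treat each row as a softmax
    # classification whose correct class is its own augmented pair.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Loss is lower when the second view is close to the first (aligned pairs)
# than when the two views are unrelated random vectors.
aligned = contrastive_loss(z, z + 0.01 * rng.normal(size=(8, 16)))
random_ = contrastive_loss(z, rng.normal(size=(8, 16)))
```

Non-contrastive methods (also covered in the post) drop the explicit negatives and instead prevent representational collapse by other means, such as asymmetric architectures or redundancy-reduction terms.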
FAIR introduces TimeSformer

TimeSformer, by Facebook AI Research (FAIR), is a convolution-free, purely Transformer-based architecture for video processing. It sets new benchmarks on Kinetics-400 and Kinetics-600 (action recognition) while being three times faster to train and using only a tenth of the compute. Their methodology involves running self-attention separately in time and in space for a given patch. Essentially, consider an individual video frame (basically an image); the frame can be broken up into fixed-size patches. The temporal self-attention mechanism runs attention for a given patch across the same patch location in all the other video frames, while the spatial self-attention mechanism runs attention for that patch across all the other patches in the same frame.

Yoshua Bengio, Yann LeCun and Geoffrey Hinton are keynote speakers

The "deep learning fathers", Yoshua Bengio, Yann LeCun and Geoffrey Hinton, have been invited to give talks at NVIDIA's GTC21 conference. GTC is a free online conference happening April 12-16 with live sessions across the world. No registration is required to view the keynote speakers; find out more information here.
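The divided space-time attention described for TimeSformer above can be sketched in a few lines of NumPy. This is a toy single-head version without the learned query/key/value projections, layer norms, or residual connections of the real model; the function names and the (frames, patches, dims) layout are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over the last two axes.
    scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def divided_space_time_attention(x):
    """Toy divided attention for x of shape (T frames, N patches, D dims).

    Temporal step: each patch attends to the same patch location in all
    other frames. Spatial step: each patch then attends to all other
    patches within its own frame.
    """
    # Temporal: group by patch position, attend across the T frames.
    xt = np.swapaxes(x, 0, 1)        # (N, T, D)
    xt = attention(xt, xt, xt)
    x = np.swapaxes(xt, 0, 1)        # back to (T, N, D)
    # Spatial: for each frame, attend across its N patches.
    return attention(x, x, x)

rng = np.random.default_rng(1)
clip = rng.normal(size=(4, 9, 8))    # 4 frames, 9 patches, 8-dim tokens
out = divided_space_time_attention(clip)
```

Factoring attention this way is what drives the efficiency claim above: attending over T frames plus N patches per step is far cheaper than joint attention over all T*N tokens at once.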