Lecture 5 introduces neural network quantization. In this lecture, we review the numeric data types in modern computing systems and introduce K-means-based quantization and linear quantization.

Keywords: Neural Network Quantization, Quantization, K-Means-Based Quantization, Linear Quantization

Slides: https://efficientml.ai/schedule/

--------------------------------------------------------------------------------------

TinyML and Efficient Deep Learning Computing
Instructor: Song Han (https://songhan.mit.edu)

Have you found it difficult to deploy neural networks on mobile and IoT devices? Have you ever found it too slow to train neural networks? This course is a deep dive into efficient machine learning techniques that enable powerful deep learning applications on resource-constrained devices. Topics cover efficient inference techniques, including model compression, pruning, quantization, neural architecture search, and distillation; efficient training techniques, including gradient compression and on-device transfer learning; application-specific model optimization techniques for videos, point clouds, and NLP; and efficient quantum machine learning. Students will get hands-on experience implementing deep learning applications on microcontrollers, mobile phones, and quantum machines with an open-ended design project related to mobile AI.

Website: http://efficientml.ai/
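
To make the linear quantization idea mentioned in the lecture summary concrete, here is a minimal NumPy sketch of affine (scale and zero-point) quantization. It is an illustration only, not the lecture's own code; the function names and the choice of an unsigned 8-bit range are assumptions.

import numpy as np

def linear_quantize(w, num_bits=8):
    # Affine mapping: real value r is approximated as scale * (q - zero_point)
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    scale = max(scale, 1e-8)  # guard against a constant tensor
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def linear_dequantize(q, scale, zero_point):
    # Recover approximate real values from the integer codes
    return scale * (q.astype(np.float32) - zero_point)

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = linear_quantize(w)
w_hat = linear_dequantize(q, s, z)
print(np.abs(w - w_hat).max())  # reconstruction error is on the order of scale / 2

K-means-based quantization, also covered in the lecture, instead clusters the weight values and stores one floating-point centroid per cluster plus a small integer index per weight.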