Google's first generation of TPUs, Tesla's Full Self-Driving hardware, and NVIDIA's latest GPU architectures have one thing in common: they rely on few-bit integer operations to maximize the efficiency of running inference on a trained neural network. This talk will explore how network quantization translates a standard floating-point network into one that runs with integer-only computations. We will discuss what this means for the accuracy and other properties of the network. Finally, we will examine what we can already do during training to avoid a loss in accuracy when running a network with integer operations.

SPEAKER
Mathias Lechner is a third-year PhD student at IST Austria working with Prof. Thomas Henzinger. His research lies at the intersection of deep learning, trustworthy AI, and verification. The results of his research have been published at premier AI venues, including NeurIPS, ICLR, ICML, and Nature Machine Intelligence. Before joining IST Austria, he interned at MIT CSAIL in Daniela Rus' lab. He received his MSc and BSc in Computer Engineering from the Vienna University of Technology (TU Wien), Austria, where his MSc thesis received the Distinguished Young Alumnus Award from TU Wien's Faculty of Informatics.
https://mlech26l.github.io/pages/about/
/ mathias-lechner-4008b0154
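To give a flavor of the topic, the sketch below shows one common way a floating-point tensor is mapped to 8-bit integers, affine (asymmetric) quantization with a scale and zero-point. It is a minimal illustration only; the function names and parameter choices are assumptions for this example and are not taken from the talk.

```python
import numpy as np

def quantize_affine(x, num_bits=8):
    """Affine quantization: q = round(x / scale) + zero_point,
    so that x is approximately scale * (q - zero_point)."""
    qmin, qmax = 0, 2**num_bits - 1
    x_min, x_max = float(x.min()), float(x.max())
    # Ensure zero is exactly representable (important for padding, ReLU, etc.)
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map integers back to floats to inspect the quantization error."""
    return scale * (q.astype(np.float32) - zero_point)

# Quantize a random weight matrix and measure the round-trip error.
w = np.random.randn(64, 64).astype(np.float32)
q, scale, zp = quantize_affine(w)
w_hat = dequantize(q, scale, zp)
print("max abs error:", np.abs(w - w_hat).max())
```

In an integer-only deployment, the matrix multiplications are carried out on the quantized values directly, with the scales and zero-points folded into the surrounding arithmetic; the talk discusses how this is done and what it costs in accuracy.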