This lecture was part of the AutoML conference, organized by the MDLI community. Link: https://bit.ly/AutoMLConf

Fast and accurate deep neural networks (DNNs) are key to successfully building and deploying commercial AI applications. A wide and growing range of exciting new applications can be built on deep learning models as they become larger and more accurate. However, the computational cost of operating DNNs can also be very high, placing a ceiling on the cost-effectiveness of DNN inference. A related but distinct obstacle is the need to deploy strong DNNs on edge devices with limited computing power. If DNNs are to achieve affordable inference costs or run on edge devices, they must be made computationally efficient while retaining their accuracy and robustness.

To achieve lightweight-but-accurate DNNs, architectures must be designed for specific AI chips while taking into account all available inference acceleration techniques, including compilation and quantization. Producing such neural designs requires a very rare skill set that few commercial parties possess. Neural architecture search (NAS) is a potentially viable approach to creating such models. In this 15-minute lecture, you will learn more about NAS, its limitations, and whether it can be applied in commercial applications.
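To make the hardware-aware search idea concrete, here is a minimal sketch of NAS framed as constrained random search over a tiny architecture space. Everything in it is a hypothetical stand-in: the search space, the `estimated_latency_ms` cost model, and the `accuracy_proxy` scorer are illustrative placeholders (a real system would profile the compiled, quantized network on the target chip and evaluate candidates with a trained supernet or short training runs). This is not the method presented in the lecture or Deci AI's approach.

```python
import random

# Hypothetical search space: per-stage depth, width, and kernel-size choices.
SEARCH_SPACE = {
    "depth": [2, 3, 4],      # blocks per stage
    "width": [32, 64, 128],  # channels per stage
    "kernel": [3, 5, 7],     # convolution kernel size
}

LATENCY_BUDGET_MS = 5.0      # assumed per-inference budget on the target chip


def sample_architecture():
    """Draw one random architecture from the search space."""
    return {name: random.choice(opts) for name, opts in SEARCH_SPACE.items()}


def estimated_latency_ms(arch):
    """Toy latency model: in practice this would come from profiling the
    compiled, quantized network on the actual target device."""
    return 0.4 * arch["depth"] * (arch["width"] / 32) * (arch["kernel"] / 3)


def accuracy_proxy(arch):
    """Toy accuracy proxy: in practice this would be a supernet evaluation
    or a short training run; here, bigger architectures simply score higher."""
    return 0.1 * arch["depth"] + 0.001 * arch["width"] + 0.01 * arch["kernel"]


def random_search(num_trials=1000):
    """Keep the best-scoring architecture that fits the latency budget."""
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        if estimated_latency_ms(arch) > LATENCY_BUDGET_MS:
            continue  # reject candidates that violate the hardware constraint
        score = accuracy_proxy(arch)
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch


if __name__ == "__main__":
    print(random_search())
```

Random search is only the simplest baseline; the same accuracy-under-a-latency-budget loop underlies more sophisticated NAS strategies (evolutionary, reinforcement-learning-based, or differentiable search), which mainly differ in how they propose the next candidate.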