Find out what new things you can code by downloading the new version of Docker Desktop → https://dockr.ly/3R1KjXf

The use of specialized processors for specialized tasks dates back to the 1970s, when CPUs were paired with coprocessors for floating-point calculation. Fast forward to today: most machine learning (ML) computations run on a combination of CPUs and GPUs, or on specialized AI hardware accelerators such as AWS Inferentia and Intel Habana Gaudi. Docker and container technologies have become indispensable tools for scaling machine learning, but how do they work when you have more than one type of processor on the same system? Does Docker still guarantee all its benefits when working with different processors?

In this session, I'll cover how Docker containers work on CPU-only systems and how you can use them in heterogeneous systems with multiple processors. Based on current trends and the introduction of newer AI silicon (GPUs, TPUs, Graphcore, AWS Trainium & Inferentia, Intel Habana Gaudi, and more), it's evident that future machine learning workloads will run on multiple processors. Finally, I'll discuss the role of containers in this future.

• Speaker: Shashank Prasanna, Developer Advocate at AWS.

Join Shashank on LinkedIn → / shashankprasanna
Join Shashank on Twitter → / shshnkp

--

Join the conversation!
LinkedIn → https://dockr.ly/LinkedIn
Twitter → https://dockr.ly/Twitter
Facebook → https://dockr.ly/Facebook
Instagram → https://dockr.ly/Instagram
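
--

As a rough illustration of the portability the session is about (not taken from the talk itself): the same container image can target different processors depending on how it is launched. For example, a plain `docker run` exposes only CPUs, while `docker run --gpus all ...` (with the NVIDIA Container Toolkit installed on the host) also exposes the host's GPUs. A minimal Python sketch, assuming PyTorch is installed in the image:

    import torch

    # Inside a container, visible hardware depends on how it was launched:
    # a plain `docker run` sees only CPUs, while `docker run --gpus all`
    # (with the NVIDIA Container Toolkit on the host) also exposes GPUs.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Running on: {device}")

    # The same code path then works on either processor without changes.
    x = torch.randn(1024, 1024, device=device)
    y = x @ x  # matrix multiply runs on the GPU if available, else the CPU

The point of the fallback check is that one image runs unchanged on CPU-only and GPU-equipped hosts; which accelerator the workload uses is decided at launch time, not at build time.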