As deep learning models grow in complexity, particularly with the rise of Large Language Models (LLMs) and generative AI, scalable and cost-effective training has become a critical challenge. This talk introduces Ray Train, an open-source, production-ready library built for seamless distributed deep learning. We will explore its architecture, advanced resource scheduling, and intuitive APIs that simplify integration with popular frameworks such as PyTorch, Lightning, and Hugging Face. Attendees will leave with a clear understanding of how Ray Train accelerates large-scale model training while ensuring reliability and efficiency in production environments.

Learn More
Ray - https://www.ray.io/
Anyscale - https://www.anyscale.com/

About the Speaker
Suman Debnath is a Technical Lead (ML) at Anyscale, where he focuses on distributed training, fine-tuning, and inference optimization at scale on the cloud. His work centers on building and optimizing end-to-end machine learning workflows powered by distributed computing frameworks like Ray, enabling scalable and efficient ML systems.