Don't miss out! Join us at our next Flagship Conference: KubeCon + CloudNativeCon events in Hong Kong, China (June 10-11); Tokyo, Japan (June 16-17); Hyderabad, India (August 6-7); Atlanta, US (November 10-13). Connect with our current graduated, incubating, and sandbox projects as the community gathers to further the education and advancement of cloud native computing. Learn more at https://kubecon.io

Speed up Your ML Workloads With Kubernetes Powered In-memory Data Caching - Rasik Pandey & Akshay Chitneni, Apple

ML workloads require repetitive access to data for model training. In cloud environments this repetitive access can be both slow and costly, further slowing model training and leaving GPU resources idle while waiting for data to load. As datasets and training workloads grow larger and more sophisticated in the era of GenAI, efficient data access is crucial to improving training speed and efficiency. In this talk, we will discuss optimized data caching for ML workloads using Apache Iceberg, Apache Arrow Flight, and Kubernetes. We will demonstrate a distributed in-memory cache of an Iceberg table across a fleet of Kubernetes pods, used to load data more efficiently into Kubeflow training workloads.