Modern AI workloads changed the fundamental bottleneck in software systems. For years, most applications were limited by I/O - reading from databases, writing to storage, waiting on network latency. But AI systems are different. Training jobs, evaluation pipelines, reinforcement learning loops, and LLM inference all trigger heavy computation across CPUs and GPUs. The bottleneck has shifted from moving data to coordinating compute.

Modern AI systems increasingly combine multimodal data (text, images, audio, video) and run across heterogeneous hardware environments that include CPUs, GPUs, and accelerators. These workloads are dynamic, long-running, and failure-sensitive. Coordinating this compute efficiently requires more than traditional cloud orchestration - it requires a distributed execution model designed for compute-heavy systems.

In this video, we break down:
The shift from I/O-bound to compute-bound systems
Why traditional cloud infrastructure breaks for AI workloads
Why distributed execution becomes a core infrastructure requirement
How Ray provides a Python-native distributed execution layer
How Ray tasks and actors enable scalable, fault-tolerant compute
How Ray fits into the emerging AI compute infrastructure stack alongside PyTorch, vLLM, and Kubernetes
Why Ray was originally built for reinforcement learning - and why that still matters today

Ray is an open-source distributed computing engine built for scaling AI and Python workloads from a laptop to large clusters. It provides the execution primitives - tasks, actors, scheduling, fault tolerance, and resource awareness - required to coordinate dynamic, compute-heavy systems. (A minimal tasks-and-actors sketch appears at the end of this description.)

🔎 Chapters
00:00 How AI Workloads Changed System Bottlenecks
00:45 I/O-Bound vs Compute-Bound Systems
01:37 Why Traditional Cloud Infrastructure Breaks for AI
03:06 How Teams Are Building Today for AI Workloads
04:26 Why AI Needs a Distributed Execution Layer
05:07 What Is Ray? The Distributed Compute Engine Explained
06:05 Quick Demo of Ray Tasks and Ray Actors
08:29 The Emerging AI Compute Stack Explained
09:54 Ray's Origins: Why Ray Started with Reinforcement Learning
11:16 Ray Joins the PyTorch Foundation under the Linux Foundation
11:30 How to Get Started with Ray

🔗 Learn More About Ray
Ray Documentation: https://docs.ray.io
Ray Website: https://www.ray.io
Ray GitHub: https://github.com/ray-project/ray
Get started with hands-on labs and templates: https://www.anyscale.com/examples
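
⚡ Try It Yourself: Ray Tasks and Actors
The snippet below is a minimal sketch of the tasks-and-actors model demonstrated in the video, not the exact demo code. It assumes Ray is installed locally (pip install ray), and the names square and Counter are illustrative placeholders.

import ray

ray.init()  # start a local Ray runtime on this machine

# A task: a stateless Python function that Ray schedules and runs in parallel
@ray.remote
def square(x):
    return x * x

# An actor: a stateful worker whose methods run remotely against its own state
@ray.remote
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# Launch four tasks in parallel and gather the results
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

# Create one actor and send it three method calls; state persists between calls
counter = Counter.remote()
print(ray.get([counter.increment.remote() for _ in range(3)]))  # [1, 2, 3]

ray.shutdown()

The same code scales from a laptop to a cluster: .remote() returns futures immediately, and Ray's scheduler decides where each task or actor method actually runs.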