AI’s Connectivity War: Ethernet vs InfiniBand, NVLink vs PCIe 7 + UALink
📅 2026-01-09

AI isn’t running out of compute; it’s running out of connection. Today on BD Deep Dive (Where ideas go deeper.), we unpack the GPU interconnect arms race: why training is increasingly “communication-bound,” how Ethernet (RoCE) clawed its way into the data center, why NVIDIA’s NVLink still rules inside the box, and how PCIe 6.0/7.0 Multistream plus UALink are trying to close a brutal bandwidth gap. We’ll also zoom out to the next frontier: modular “Lite-GPU” designs and co-packaged optics that could rewrite how accelerators are built, and even bring cluster-style AI to the desktop.

🎯 Highlights
- Why model scaling turned networking into the real bottleneck for training
- InfiniBand’s latency crown vs Ethernet’s tuned RoCE “good enough” reality
- The 512-GPU economics: TCO savings that can buy more GPUs
- NVLink’s scale-up dominance and the standards world’s counterpunch (PCIe Multistream + UALink)
- The hardware future: Lite-GPUs, yield math, and co-packaged optics

Subscribe for more practical deep dives. Like/comment: Which matters more for your workloads: lowest latency, lowest cost, or open standards?

#AI #GPUs #Networking #Ethernet #InfiniBand #NVLink #PCIe #RoCE #DataCenter #Hardware

CHAPTERS
00:00:00 GPU interconnect dominance battle overview
00:00:36 LLM scaling shifts bottleneck to communication
00:01:30 Three fronts: data center, scale-up, emerging
00:02:16 InfiniBand leads with ultra-low latency
00:02:50 Ethernet ascends with tuned RoCE performance
00:03:46 Ethernet wins on TCO for many clusters
00:04:34 NVLink dominates inside-server GPU bandwidth
00:05:12 PCIe Multistream reduces wasted bandwidth cycles
00:05:55 UALink aims to close NVLink bandwidth gap
00:06:37 Lite-GPU modular shift enabled by CPO optics
00:07:37 Desktop enters the interconnect race with RDMA
00:08:32 Software boosts efficiency: sparsity and orchestration
00:09:17 Next decade trends: openness, optics, software
00:10:00 Conclusions: trade-offs and no single winner