Video: Data Centers in Space: The Future of AI Compute | Philip Johnston (uploaded to YouTube).
Join us for an in-depth AMA and podcast conversation with Philip Johnston, Co-founder & CEO of Starcloud, the world's first orbital data center company. 🚀

We cover:
1. Why orbital infrastructure is the next leap in AI compute
2. Starcloud's plan to launch megawatt-scale compute into orbit by 2027
3. The challenges of energy, cooling, and scalability in space
4. Lessons from building at the frontier of deep tech, aerospace, and AI

This wide-ranging discussion blends technical insight, founder perspective, and long-term vision. Perfect for anyone curious about the future of AI infrastructure, space technology, and frontier startups.

📌 Topics include: orbital data centers, AI compute demand, space infrastructure, scaling deep-tech startups.

Chapters:
00:00 Intro
01:13 What Starcloud is building
02:19 5 GW vision; module approach
03:03 Architecture: central spine + modules
03:27 Solar arrays and radiators
03:58 Demo sat (H100s), Nov target
04:21 Roadmap to 40 MW modules
05:29 Modularity and self-sufficiency
06:06 Cooling and racks
08:05 Top risks and objections
11:20 Mission life & end-of-life
13:16 Disposal options
14:09 Maintenance strategy
17:40 Backhaul plan
18:22 Iteration and capex
19:32 Launch cadence; Sat-2 service
20:31 Costs/runway overview
22:12 Early customers (DoD/USG)
23:37 Differentiation (H100s)
25:01 EO data bottleneck
25:44 Space-to-space optical
26:08 On-orbit inference example
26:49 Latency: hours → seconds
27:09 Contrarian view (waste heat)
29:01 Q&A
31:48 Debris strategy
35:16 LEO capacity; Lagrange points
38:23 Scale refs; mass & launches
41:20 Launch economics context
42:23 Misconceptions (cooling, latency)
47:19 CAPEX per module
50:57 Closing advice
51:39 Wrap