Microsoft Launched 3nm Maia 200 Monster Chip - Now AI No Longer Needs NVIDIA & AMD!
AI just hit three walls at the same time: compute cost, model scale, and the physical world. And the companies responding to those limits are quietly reshaping who will control AI over the next decade. In this video, we break down three developments that look unrelated on the surface, but are actually part of the same shift: Microsoft’s Maia 200, Cerebras’ wafer-scale chip, and a new humanoid robot that doesn’t move like it’s programmed.

For years, every major cloud provider has depended on one company: NVIDIA. Its GPUs power almost all modern AI training and inference, and that dependency has become expensive, rigid, and increasingly risky. Microsoft’s response isn’t to fight NVIDIA on raw power; it’s to quietly escape. Microsoft’s Maia 200 is a custom AI accelerator built specifically for inference at massive Azure scale. It isn’t a GPU, and it isn’t trying to be everything. It’s designed to run Microsoft’s own models as cheaply and efficiently as possible, cutting the cost of inference, the part of AI that never stops costing money.

At the same time, Cerebras Systems is attacking the problem from the opposite direction. Instead of building clusters of thousands of chips, Cerebras built a single chip the size of a dinner plate. With hundreds of thousands of cores on one wafer, it eliminates the data-movement bottleneck that slows down large-scale AI training. Investors aren’t valuing Cerebras at $23 billion because of current revenue; they’re betting that the GPU-centric training model is reaching its limits.

Then there’s the third shift: AI leaving the data center. A humanoid robot demoed in China recently showed movement that didn’t look scripted or pre-programmed. It adapted in real time, reacting to the environment as it moved. That matters because language models talk, but robots act. And once AI can reliably act in the physical world, the economic impact moves far beyond chatbots into factories, logistics, construction, and infrastructure.
What ties all three stories together is control. Custom silicon breaks dependence. Wafer-scale compute shortens training cycles. Physical AI turns software intelligence into a real-world force. This isn’t about “better models” anymore; it’s about who owns compute, who controls inference costs, and who gets to decide how AI actually shows up in the world. The GPU monopoly isn’t collapsing overnight, but it’s fragmenting. Hyperscalers are going custom. Training paradigms are shifting. Robots are moving from demos to deployment. And over the next few years, we’ll finally see which AI businesses are sustainable and which were built on subsidized hype. And if you want to stay ahead of these shifts before they become obvious, make sure to like and subscribe to Evolving AI for daily coverage of what’s actually happening in AI.