Tech giants are bypassing Nvidia’s 70% AI tax by building custom silicon. But Google and Meta can't do it without this hidden hardware partner.

Nvidia built an empire selling generalized GPUs that function like massive, expensive Swiss Army knives. While these chips were perfect for the chaotic trial-and-error phase of training large language models, they are an absolute nightmare for the daily, repetitive grind of AI inference. For hyperscalers running millions of user queries a second, using a highly generalized H100 is like using a heavy-duty pickup truck to deliver a single envelope.

To escape Nvidia's punishing 70% gross margins and severe power constraints, cloud providers like Google, AWS, and Meta are pivoting hard to custom ASICs. Chips designed specifically for inference can slash power consumption by 65% and drastically lower the total cost of ownership. But there is a massive catch. Software giants employ thousands of world-class algorithm writers, but they lack the deep, atomic-level engineering required to physically map 100 billion microscopic transistors onto a piece of silicon. They have the architectural vision, but they don't know how to physically build the chip or integrate it with TSMC's cutting-edge foundry equipment.

Enter Marvell Technology. Operating entirely behind the scenes, Marvell acts as the ultimate translation layer between theoretical software ambitions and harsh physical manufacturing realities. They provide the highly specialized "surround silicon"—like SerDes and high-speed optical interconnects—that actually allows a custom AI processor to communicate inside a massive server rack without creating crippling data bottlenecks. By charging upfront non-recurring engineering fees, Marvell forces the tech giants to underwrite all the R&D risk.
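The TCO argument above can be made concrete with a back-of-envelope calculation. The sketch below is purely illustrative: every dollar and wattage figure is a made-up assumption, and only the roughly 65% power reduction for inference ASICs comes from the transcript itself.

```python
# Back-of-envelope annual TCO for inference hardware.
# All capex, wattage, electricity, and PUE figures here are
# illustrative assumptions, NOT vendor data; only the ~65%
# power reduction for inference ASICs comes from the text above.

def annual_tco(capex_usd, power_watts, years=4, usd_per_kwh=0.08, pue=1.3):
    """Annualized cost: amortized hardware price plus electricity.

    pue (power usage effectiveness) scales wall power to account
    for cooling and facility overhead.
    """
    energy_kwh = power_watts / 1000 * 24 * 365 * pue
    return capex_usd / years + energy_kwh * usd_per_kwh

# Hypothetical figures: a merchant GPU bought at a ~70% gross margin
# versus a custom ASIC drawing 65% less power for the same workload.
gpu_cost = annual_tco(capex_usd=30_000, power_watts=700)
asic_cost = annual_tco(capex_usd=12_000, power_watts=700 * (1 - 0.65))

print(f"GPU  annual TCO: ${gpu_cost:,.0f}")
print(f"ASIC annual TCO: ${asic_cost:,.0f}")
```

Under these assumed numbers the ASIC's annual cost is well under half the GPU's, and the gap only widens with fleet size, which is the economic logic driving the hyperscalers' pivot.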
Once the custom chip hits high-volume production, Marvell locks in multi-year purchase orders, effectively building an inescapable, multi-generational tollbooth on the future of AI hardware infrastructure.

Do you think Nvidia can maintain its hardware monopoly, or will custom silicon completely take over AI inference? Drop your take in the comments. Subscribe and hit the bell so you catch the next story.

DISCLAIMER: The content on this channel is for educational and informational purposes only. Nothing presented here constitutes investment advice, financial advice, trading advice, or any other type of professional advice. You should not treat any of the content as a recommendation to buy, sell, or hold any security, stock, or financial instrument. Always conduct your own research and consult with a qualified financial advisor before making any investment decisions.