Improving AI Inference with AMD EPYC Host CPUs | Signal65 Webcast
Chapters:
00:01:00 - The Role of CPUs in AI Compute
00:03:00 - High-Performance CPUs for AI Workloads
00:05:00 - Different CPU Requirements for AI Workloads
00:06:30 - Testing CPU Performance in AI
00:08:00 - Impact of CPU on AI Model Performance
00:10:00 - Trends in AI Performance Bottlenecks
00:11:30 - Financial Benefits of CPU Selection
00:13:00 - Power and Space Efficiency in Data Centers
00:14:30 - Guidance for AI Infrastructure Investment
00:16:00 - AMD's Comprehensive AI Solutions

AI performance gains are increasingly determined by what happens before and after the GPU. In this Signal65 webcast, Ryan Shrout, Russ Fellows, and Mitch Lewis are joined by Madhu Rangarajan, Corporate VP, Compute and Enterprise AI Products at AMD, and Curt Waltman, Senior Director, Compute and Enterprise AI Products at AMD, to explore how AMD EPYC processors are improving AI inference performance in enterprise environments.

As AI workloads move from experimentation to production, the efficiency and scalability of the host platform become critical. This discussion breaks down how EPYC CPUs support AI acceleration, optimize data movement, and deliver measurable performance improvements in real-world deployments.

Key Takeaways:
🔹 Inference is infrastructure-bound: AI performance is heavily influenced by host CPU architecture, not just accelerators.
🔹 Data movement is a bottleneck: Memory bandwidth, I/O, and interconnects significantly impact AI workload efficiency.
🔹 CPU + GPU synergy matters: Optimizing inference requires tight integration between EPYC CPUs and AI accelerators.
🔹 Enterprise AI requires balance: Power efficiency, core density, and scalability determine real-world deployment success.
🔹 Platform-level optimization wins: AI performance is achieved through system-level engineering, not component-level thinking.
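The webcast's central claim — that host-side work before and after the accelerator gates end-to-end inference throughput — can be illustrated with a toy timing harness. This is a minimal sketch, not anything from the webcast or AMD: the function name and stage callables are hypothetical stand-ins for real preprocessing (tokenization, batching) and accelerator compute.

```python
import time

def run_inference_pipeline(batch, host_work, accel_work):
    """Time each stage of a toy inference pipeline separately.

    host_work and accel_work are hypothetical callables standing in
    for real host-side preprocessing and accelerator model compute.
    Returns the pipeline output plus per-stage wall-clock timings.
    """
    timings = {}

    # Host CPU stage: preprocessing (e.g., tokenization, batching).
    t0 = time.perf_counter()
    prepared = host_work(batch)
    timings["host_pre"] = time.perf_counter() - t0

    # Accelerator stage: the model forward pass.
    t0 = time.perf_counter()
    raw = accel_work(prepared)
    timings["accel"] = time.perf_counter() - t0

    # Host CPU stage: postprocessing (here, a trivial transform).
    t0 = time.perf_counter()
    result = [x * 2 for x in raw]
    timings["host_post"] = time.perf_counter() - t0

    # Fraction of total time spent on the host CPU: if this is large,
    # a faster accelerator alone will not improve end-to-end latency.
    host = timings["host_pre"] + timings["host_post"]
    timings["host_fraction"] = host / (host + timings["accel"])
    return result, timings

# Example: identity "accelerator" with lightweight host stages.
out, t = run_inference_pipeline([1, 2, 3],
                                host_work=lambda b: [x + 1 for x in b],
                                accel_work=lambda b: b)
```

Profiling real deployments stage by stage in this way is how the "infrastructure-bound" effect the panel describes would show up: as the accelerator stage shrinks, the host-CPU fraction of total latency grows.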
To understand how EPYC CPUs are shaping AI inference performance in enterprise data centers: https://www.amd.com/en.html
Read the Signal65 research paper: https://signal65.com/research/ai/impr...

#AMD #AIInference #CPUs #GPUs #Infrastructure #PerformanceAnalysis

Disclaimer: Six Five Media is for information and entertainment purposes only. Over the course of this webcast, we may talk about companies that are publicly traded, and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors, and we ask that you do not treat us as such.