Nvidia Acquires SchedMD: Slurm and the AI Stack (video uploaded to YouTube)
Nvidia Buys Slurm Developer SchedMD to Boost Open-Source AI Stack

NEWS RECAP (WHAT HAPPENED)
Nvidia $NVDA announced it has acquired SchedMD, the company behind Slurm, an open-source workload manager used to schedule and manage large compute jobs across clusters. Nvidia says Slurm will remain open-source and vendor-neutral, and it plans to keep investing in its development. Deal terms were not disclosed.

WHY THIS MATTERS FOR TRADERS
Slurm is "plumbing" for AI and HPC: it decides which jobs run where, when, and on how many GPUs and nodes. If Nvidia can help make Slurm better integrated, easier to operate, and more efficient on modern GPU clusters, that can increase GPU utilisation, reduce idle time, and make Nvidia-based infrastructure more attractive at scale. This is also another step in Nvidia building a broader ecosystem moat beyond chips, while still keeping the tool open to maintain industry trust.

WINNERS (3 CATEGORIES)

Nvidia GPU ecosystem and AI server builders
Why: Better scheduling and cluster efficiency can accelerate GPU cluster deployments and upgrades, supporting demand for Nvidia-centric systems.
Names: $NVDA, $SMCI, $DELL, $HPE

AI cloud and GPU capacity providers
Why: Slurm is widely used to run and allocate massive AI training and inference workloads. Improvements and deeper support can lower friction for customers renting large GPU clusters and scaling workloads. Reuters also noted that SchedMD customers include CoreWeave.
Names: $AMZN, $MSFT, $GOOGL, $CRWV

Data centre networking and interconnect beneficiaries
Why: More efficient GPU clusters typically mean more scaling, and scaling GPU clusters pulls through high-speed networking, switching, and interconnect spend.
Names: $ANET, $AVGO, $MRVL

LOSERS (3 CATEGORIES)

Rival accelerator and alternative platform vendors
Why: Nvidia is tightening its grip on the "full stack" (hardware plus critical infrastructure software).
That can raise switching costs and make it harder for competing accelerators to win large, standardised deployments.
Names: $AMD, $INTC, $ARM

Proprietary HPC scheduler and workload-management vendors
Why: If Nvidia helps keep Slurm the default, best-supported open option, it can pressure paid, proprietary scheduling platforms used in some HPC environments. (IBM markets Spectrum LSF as an HPC workload-management and job-scheduling platform.)
Names: $IBM, $ORCL

Paid AI platform and MLOps vendors (second-order watchlist)
Why: Nvidia is clearly pushing harder into open source across the AI stack (software infrastructure plus open models). If more capability becomes "good enough" and widely available, some paid layers could see pricing pressure or slower growth in certain customer segments.
Names: $AI, $SNOW, $PLTR

#StockMarket #Trading #Investing #DayTrading #SwingTrading #Nvidia #AI #OpenSource #HPC #DataCenter #Semiconductors #CloudComputing
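For context on what "deciding which jobs run where, when, and on how many GPUs and nodes" looks like in practice: Slurm jobs are typically submitted as batch scripts whose #SBATCH directives tell the scheduler what resources to allocate. A minimal illustrative sketch follows; the job name, partition name, and training script are hypothetical placeholders, not details from the acquisition news.

```bash
#!/bin/bash
# Minimal Slurm batch script (illustrative sketch).
# The scheduler reads the #SBATCH directives below and decides when and
# where to place the job across the cluster.
#SBATCH --job-name=train-llm        # hypothetical job name
#SBATCH --nodes=2                   # request 2 compute nodes
#SBATCH --gpus-per-node=8           # 8 GPUs on each node
#SBATCH --time=04:00:00             # wall-clock limit of 4 hours
#SBATCH --partition=gpu             # hypothetical GPU partition name

# srun launches the workload across the allocated nodes;
# train.py is a placeholder for the user's training script.
srun python train.py
```

Submitted with `sbatch script.sh`, this is the layer Slurm controls: it queues the request, finds 2 nodes with 8 free GPUs each, and starts the job when that capacity is available, which is why scheduler efficiency translates directly into GPU utilisation.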