Is the Nvidia Monopoly Over? Fine-tuning our Liquid AI LLMs on AMD MI325X
Is training on AMD hardware actually viable in 2026? For years, the narrative has been that AMD Instinct GPUs are great for inference but a nightmare for training due to software fragmentation. That is no longer true. In this video, I fine-tune Liquid AI's LFM2.5-1.2B-Instruct on AMD Instinct MI325X GPUs.

💡 The Hardware Context: For those unfamiliar, the AMD MI325X is the direct competitor to the NVIDIA H200. It is the current heavy-duty workhorse for AI, with 256GB of HBM3e memory per GPU. While it isn't the absolute newest silicon on the market, it represents the tier of hardware that most serious labs are deploying right now.

🛠️ The Stack: The most impressive part of this demo isn't the speed; it's the simplicity.
OS: Linux (Ubuntu)
Driver: ROCm 7.1
Libraries: Standard PyTorch + Hugging Face Transformers
Provider: Tensorwave
There are no custom Docker containers, no obscure forks, and no complex kernel hacks. If you know how to fine-tune on Nvidia, you already know how to fine-tune on AMD; a minimal sketch follows below.
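To make the "same script as on Nvidia" point concrete, here is a minimal sketch of a causal-LM fine-tuning run with plain PyTorch and Hugging Face Transformers. The Hub model ID, the dataset, and the hyperparameters below are illustrative assumptions, not the exact settings used in the video; the key point is that a ROCm build of PyTorch exposes the MI325X through the usual "cuda" device, so nothing hardware-specific appears in the code.

```python
# Minimal fine-tuning sketch: standard PyTorch + Hugging Face Transformers.
# On a ROCm build of PyTorch, the MI325X is addressed via the usual "cuda"
# device, so this exact script also runs unchanged on NVIDIA GPUs.
# NOTE: the Hub model ID, dataset, and hyperparameters are illustrative
# placeholders, not the settings used in the video.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "LiquidAI/LFM2.5-1.2B-Instruct"  # assumed Hub ID; check the LiquidAI org

print(torch.cuda.is_available())        # True on ROCm builds of PyTorch as well
print(torch.cuda.get_device_name(0))    # e.g. "AMD Instinct MI325X"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Any text dataset works for this demo; wikitext is just a stand-in example.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
raw = raw.filter(lambda x: len(x["text"].strip()) > 0)  # drop empty rows

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lfm-sft-mi325x",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same script, installed against the CUDA wheel of PyTorch instead of the ROCm wheel, runs on an H200 without a single line changed, which is the whole argument of the video.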