About the Talk

Democratizing Large Model Training on Smaller GPUs with FSDP

At this DevDay talk, Preethi Srinivasan, Solution Consultant at Sahaj Software, demonstrated how combining QLoRA and Fully Sharded Data Parallel (FSDP) makes large-model training possible on smaller, consumer-grade GPUs, bridging the gap between enterprise hardware and accessible AI research.

Preethi walked the audience through the evolution from traditional parallelism strategies (data, model, and pipeline parallelism) to advanced techniques such as FSDP, explaining how these approaches drastically reduce memory usage, training time, and communication overhead while improving GPU utilization. She also discussed quantization, mixed precision, CPU offloading, and activation checkpointing, outlining the trade-offs that come with each optimization (see the short sketches at the end of this description).

📌 This talk covered:
- How QLoRA reduces the memory footprint of large models
- The shift from traditional to advanced parallelism approaches
- How FSDP improves scalability and training efficiency
- Practical insights into mixed precision, CPU offloading, and checkpointing
- Trade-offs between speed, accuracy, and resource usage

About the Speaker

Preethi Srinivasan
Solution Consultant, Sahaj Software

Preethi holds an M.S. (by Research) degree from IIT Mandi, where her thesis focused on medical image post-processing. She is the first author of publications at ACCV, WiML, and IEEE CBMS, and has developed ML prototypes in video understanding, LLM fine-tuning, and RAG-based QA at Sahaj. Her widely read blog series on LoRA and intrinsic dimension has earned her speaking engagements at PyCon India 2024 and The Fifth Elephant 2025.

🎥 Watch the full talk to see how QLoRA and FSDP bring large-model training to smaller GPUs, making advanced AI development achievable beyond big-budget infrastructure.
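The techniques named above can be illustrated with two short, self-contained sketches. Neither is taken from the talk itself; the model id, layer sizes, and hyperparameters are illustrative assumptions. The first shows a QLoRA-style setup: loading a base model in 4-bit NF4 precision with bitsandbytes and attaching LoRA adapters via peft so only a small fraction of parameters is trained.

```python
# QLoRA-style fine-tuning setup (a minimal sketch, not the speaker's code).
# The model id and LoRA hyperparameters below are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as used by QLoRA
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # dequantize to bf16 for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # illustrative model id
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # which projections receive adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # typically well under 1% of total parameters
```

The second sketches the FSDP-side optimizations discussed in the talk, using PyTorch's FSDP API on a toy Transformer: parameter sharding, bf16 mixed precision, CPU offloading, and activation checkpointing.

```python
# FSDP optimizations (a minimal sketch of the techniques named in the talk,
# not the speaker's code). The toy Transformer and its sizes are illustrative
# assumptions; assumes torch.distributed is already initialized (e.g. launched
# with torchrun) with one GPU per process.
import functools
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, CPUOffload
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (
    checkpoint_wrapper,
    apply_activation_checkpointing,
)

def build_sharded_model() -> FSDP:
    model = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=1024, nhead=16, batch_first=True),
        num_layers=12,
    )

    # Mixed precision: keep parameters, gradient reductions, and buffers in bf16.
    mp_policy = MixedPrecision(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.bfloat16,
        buffer_dtype=torch.bfloat16,
    )

    # Shard at the granularity of individual Transformer layers.
    wrap_policy = functools.partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls={nn.TransformerEncoderLayer},
    )

    model = FSDP(
        model,
        auto_wrap_policy=wrap_policy,
        mixed_precision=mp_policy,
        cpu_offload=CPUOffload(offload_params=True),  # trade speed for GPU memory
        device_id=torch.cuda.current_device(),
    )

    # Activation checkpointing: recompute layer activations in the backward
    # pass instead of storing them, reducing peak activation memory.
    apply_activation_checkpointing(
        model,
        checkpoint_wrapper_fn=checkpoint_wrapper,
        check_fn=lambda m: isinstance(m, nn.TransformerEncoderLayer),
    )
    return model
```

Each of these knobs trades something away, as the talk emphasizes: CPU offloading and activation checkpointing save GPU memory at the cost of extra transfers and recomputation, and 4-bit quantization saves memory at a small cost in fidelity.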