R1-Zero-like training dominated 2025, both for its usefulness and for the mystery behind how it works. I had the opportunity to talk to the first author of the Dr. GRPO algorithm, which improves upon the original GRPO algorithm (a quick sketch of the change is at the bottom of this description). Enjoy!

📌 Also learn to code, from full-stack to AI, with Scrimba: https://scrimba.com/?via=yacineMahdid (extra 20% off Pro with my link, great resource, I love the team)

Table of Contents
intro: 0:00
start of the interview: 0:50
background of Zichen: 1:20
LLM post-training: 3:00
summary of R1-Zero-like training: 5:10
v3 base model aha moment: 9:42
is self-reflection real?: 13:20
what would happen if we penalized self-reflection keywords?: 17:10
fusing keyword-based and LLM-based detection: 18:50
can you trust the LLM-as-a-judge?: 20:00
what's up with Qwen?: 27:40
Dr. GRPO overview: 29:00
why is that term there at all?: 37:40
the GRPO Nature paper removed the bias term???: 41:15
how compatible is Dr. GRPO with GSPO?: 43:40
are there drawbacks to Dr. GRPO?: 48:28
are there other terms we can remove?: 50:40
balance in algorithm engineering: 52:50
next research for the lab: 59:00
conclusion: 1:08:00

Paper link 📌: https://arxiv.org/pdf/2503.20783

Follow Zichen Liu on Twitter 📌: https://x.com/zzlccc

---
Join the newsletter for weekly AI content: https://yacinemahdid.com
Join the Discord for general discussion: / discord

Follow me online here:
Twitter: / yacinelearning
LinkedIn: / yacinemahdid
---

Have a great week! 👋
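
For those who want the gist before watching: the fix Dr. GRPO proposes in the paper is dropping two normalization terms from GRPO, the per-response length normalization in the token loss and the reward-std normalization in the advantage. Below is a minimal NumPy sketch of that difference; it is a paraphrase of the paper, not the authors' code, and the function names and the constant generation budget `max_len` are illustrative.

```python
# Minimal sketch (not the authors' code) contrasting the GRPO and Dr. GRPO
# terms, assuming scalar rewards for a group of G sampled responses and
# per-token loss arrays already computed for each response.
import numpy as np

def grpo_advantages(rewards):
    # GRPO normalizes centered rewards by the group std,
    # which the Dr. GRPO paper identifies as a bias.
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def dr_grpo_advantages(rewards):
    # Dr. GRPO drops the std division: the advantage is just the centered reward.
    r = np.asarray(rewards, dtype=float)
    return r - r.mean()

def grpo_loss(per_token_losses, lengths):
    # GRPO averages each response's token losses by its own length |o_i|,
    # which the paper argues biases training toward longer responses.
    return float(np.mean([l.sum() / n for l, n in zip(per_token_losses, lengths)]))

def dr_grpo_loss(per_token_losses, max_len):
    # Dr. GRPO replaces the per-response length normalization with a constant
    # (e.g. the generation budget `max_len`), removing the length bias.
    total = sum(l.sum() for l in per_token_losses)
    return float(total / (len(per_token_losses) * max_len))
```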