An update on DPO vs PPO for LLM alignment
A casual chat on our experiments trying to figure out which one is best.

Paper referenced: https://arxiv.org/abs/2406.09279

Abstract: Learning from preference feedback has emerged as an essential step for improving the generation quality and performance of modern language models (LMs). Despite its widespread use, the way preference-based learning is applied varies wildly, with differing data, learning algorithms, and evaluations used, making disentangling the impact of each aspect difficult. In this work, we identify four core aspects of preference-based learning: preference data, learning algorithm, reward model, and policy training prompts, systematically investigate the impact of these components on downstream model performance, and suggest a recipe for strong learning for preference feedback. Our findings indicate that all aspects are important for performance, with better preference data leading to the largest improvements, followed by the choice of learning algorithm, the use of improved reward models, and finally the use of additional unlabeled prompts for policy training. Notably, PPO outperforms DPO by up to 2.5% in math and 1.2% in general domains.

Slides: https://docs.google.com/presentation/...
Synthetic data piece: https://www.interconnects.ai/p/fronti...
Slides taken from recent Stanford Lecture: https://docs.google.com/presentation/...
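Since the comparison in the paper centers on these two learning algorithms, the following minimal PyTorch sketch shows their objectives as they are usually written: the DPO pairwise loss and PPO's clipped surrogate. Function and tensor names here are illustrative assumptions, not the authors' code, and the PPO side omits the reward model, advantage estimation, value loss, and KL penalty that a full RLHF pipeline needs.

```python
# Hedged sketch of the two objectives compared in the video/paper.
# Names and shapes are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss for a batch of preference pairs.

    Each argument is a tensor of summed token log-probs, shape (batch,).
    """
    # Log-ratio of policy to frozen reference model for each response.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Push the preferred response's ratio above the dispreferred one's, scaled by beta.
    logits = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(logits).mean()

def ppo_clipped_loss(logps, old_logps, advantages, clip_eps=0.2):
    """PPO clipped surrogate objective over sampled responses (policy term only)."""
    ratio = torch.exp(logps - old_logps)  # importance ratio pi_theta / pi_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    # Pessimistic bound: take the element-wise minimum, negate to get a loss to minimize.
    return -torch.min(unclipped, clipped).mean()
```

The sketch highlights the structural difference the talk discusses: DPO optimizes directly on labeled preference pairs against a reference model, while PPO needs a separate reward model and on-policy samples to produce the advantages it clips against.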