Video: The End of Brute Force AI? How Small Models are Beating Giants 🥊
Headline: Is the AI "Arms Race" moving in the wrong direction? 🏎️💨

For years, the industry has followed the "Training Scaling Laws": the idea that more data and bigger models always equal smarter AI. But we're seeing a shift. Just like a student who studies for 1,000 hours vs. one who uses their exam time more effectively, AI is learning to optimize its "thinking budget."

In this breakdown, we explore:
- The Scaling Law Shift: Moving from Training Laws to Inference Scaling Laws.
- The "Monkey Method": How generating thousands of samples (and using a verifier) can boost success rates from 16% to 56%! 🐒
- The Verifier Bottleneck: Why finding the "needle in the haystack" is the next big hurdle for researchers.
- Compute-Optimal Scaling: How Google DeepMind is teaching AI to decide when to "refine" an easy answer vs. "explore" a hard one. 🗺️

Can a model 14x smaller than the giants actually be more effective? Let's look at the data!

Key Concepts to Highlight:
- Coverage: The probability that a correct answer exists within a generated set.
- Inference Budget: The amount of compute power used after a model is already trained.
- Smarter Test-Taking: The shift from memorization (training) to reasoning (inference).

#AIReasoning #GoogleDeepMind #LargeLanguageMonkeys #InferenceScaling #AITutorial #BusinessAnalysis #DataVisualization #TechTrends2026 #PowerBI
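The "Coverage" concept above can be sketched numerically. A minimal illustration, assuming each generated sample independently solves the problem with probability p (the per-sample success rate; the 16% figure from the description is used here only as an example value, not a measured result):

```python
# Coverage under an independence assumption: the probability that at
# least one of k generated samples is correct, given per-sample
# success probability p.  coverage(k) = 1 - (1 - p)**k

def coverage(p: float, k: int) -> float:
    """Probability that a correct answer exists among k samples."""
    return 1.0 - (1.0 - p) ** k

if __name__ == "__main__":
    p = 0.16  # illustrative per-sample success rate
    for k in (1, 5, 25, 100, 1000):
        print(f"k={k:5d}  coverage={coverage(p, k):.3f}")
```

Note that coverage only says a correct answer exists somewhere in the sample set; actually cashing it in still requires a verifier to pick it out, which is exactly the "needle in the haystack" bottleneck the video describes.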