https://arxiv.org/pdf/2602.13517

Think Deep: Measuring LLM Reasoning via Deep-Thinking Tokens

This research introduces the Deep-Thinking Ratio (DTR), a novel metric designed to measure the true reasoning effort of large language models by analyzing their internal processing layers. While traditional methods rely on output length as a proxy for intelligence, the authors argue that long responses often indicate unreliable overthinking rather than quality. Instead, they identify deep-thinking tokens as those whose final predictions stabilize only in the model's deepest layers, after significant internal revision. Their findings show that a high DTR correlates more strongly with accuracy across complex math and science benchmarks than either token count or model confidence.

Leveraging this insight, the researchers developed Think@n, a strategy that improves efficiency by selecting responses with high DTR values. This approach allows models to match the performance of standard methods while reducing computational costs by approximately half through the early rejection of weak reasoning paths. Ultimately, the work suggests that evaluating how a model thinks internally is far more effective than simply measuring how much it generates.

#ai #research #largelanguagemodels #deeplearning
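The idea of tokens whose predictions settle only in the deepest layers can be sketched in code. The following is a minimal illustration, not the paper's implementation: function names, the `depth_frac` threshold, and the exact "settling" rule are assumptions here, and the per-layer top-1 predictions would in practice come from a layer-wise readout (e.g. a logit-lens-style probe) over a real model.

```python
def settle_layer(layer_preds):
    """Earliest layer index from which the top-1 prediction already
    equals the final layer's prediction and never changes again."""
    final = layer_preds[-1]
    settle = len(layer_preds) - 1
    for i in range(len(layer_preds) - 1, -1, -1):
        if layer_preds[i] != final:
            break
        settle = i
    return settle

def deep_thinking_ratio(per_token_layer_preds, depth_frac=0.75):
    """Fraction of tokens that settle only in the deepest layers,
    i.e. beyond depth_frac of the layer stack (illustrative rule)."""
    n_layers = len(per_token_layer_preds[0])
    cutoff = depth_frac * (n_layers - 1)
    deep = sum(1 for preds in per_token_layer_preds
               if settle_layer(preds) >= cutoff)
    return deep / len(per_token_layer_preds)

def think_at_n(responses_layer_preds):
    """Think@n-style selection sketch: among n sampled responses,
    pick the index of the one with the highest DTR."""
    scores = [deep_thinking_ratio(r) for r in responses_layer_preds]
    return max(range(len(scores)), key=scores.__getitem__)
```

For example, with a 4-layer stack, a token whose per-layer predictions are `['a', 'b', 'c', 'd']` settles only at the last layer and counts as deep-thinking, while `['x', 'x', 'x', 'x']` settles immediately and does not; a response made up mostly of the former would score a higher DTR and be preferred by the Think@n-style selector.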