[Geodesic JEPA] Semantic Tube Prediction: Geodesic Geometry for Data-Efficient LLMs (STP-JEPA)
We’ve been told for years that in the world of Large Language Models, 'Scale is King.' The recipe seemed simple: more data, more compute, and more parameters. But what if we’re hitting the limits of brute force? What if the secret to smarter AI isn’t more data, but better geometry?

Welcome to the show. Today, we’re tearing up the standard scaling-law playbook to look at a radical new framework: Semantic Tube Prediction, or STP. Most models treat token sequences like a chaotic cloud of points. STP operates on a different premise, called the Geodesic Hypothesis: high-quality reasoning doesn't wander aimlessly; it follows locally linear paths along a smooth semantic manifold. Using a JEPA-style regularizer, STP essentially builds a 'tube' around these optimal trajectories, forcing the model’s internal hidden states to stay on track and tune out statistical noise.

The results? We're seeing models reach peak accuracy in math, coding, and logic with a fraction of the training data usually required. And the best part for the architects out there: it does this without the overhead of extra forward passes or complex scaffolding.

Is the era of massive, inefficient pre-training coming to an end? Is the future of AI found in the curves of a geodesic path? Today, we’re going inside the 'tube' to find out. Let’s get started.
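Before we dive in, here is a minimal sketch of what a 'tube' penalty like the one described above could look like in PyTorch. To be clear, this is not the paper's actual regularizer: the function name tube_regularizer, the midpoint target, the hinge form, and the tube_radius parameter are all illustrative assumptions. It only assumes the Geodesic Hypothesis as stated in this episode, namely that each hidden state should stay close to the locally linear path through its neighbors.

import torch
import torch.nn.functional as F

def tube_regularizer(hidden_states: torch.Tensor,
                     tube_radius: float = 0.1) -> torch.Tensor:
    """Hypothetical STP-style 'semantic tube' penalty (illustrative only).

    hidden_states: (batch, seq_len, dim) per-token hidden states,
    with seq_len >= 3 so that each interior state has two neighbors.
    """
    h_prev = hidden_states[:, :-2, :]   # h_{t-1}
    h_curr = hidden_states[:, 1:-1, :]  # h_t
    h_next = hidden_states[:, 2:, :]    # h_{t+1}

    # Locally linear ("geodesic") target: the midpoint of the two
    # neighbors. The stop-gradient loosely mirrors the predictor/target
    # asymmetry of JEPA-style objectives (an assumption, not the paper's
    # stated design).
    midpoint = (0.5 * (h_prev + h_next)).detach()

    # How far each interior state strays from the local linear path.
    deviation = torch.linalg.vector_norm(h_curr - midpoint, dim=-1)

    # Hinge: no penalty inside the tube, quadratic penalty outside it.
    excess = F.relu(deviation - tube_radius)
    return (excess ** 2).mean()

# Hypothetical use inside a training step, added to the usual LM loss:
#   loss = ce_loss + lambda_tube * tube_regularizer(hidden_states)

Note that a penalty of this shape only reuses hidden states the forward pass has already computed, which is consistent with the episode's claim that the method adds no extra forward passes or scaffolding.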