LLM-JEPA: Large Language Models Meet Joint Embedding Predictive Architectures
This paper introduces LLM-JEPA, a Joint Embedding Predictive Architecture (JEPA) approach for enhancing Large Language Models (LLMs). It addresses the discrepancy between language-model training, which relies on input-space reconstruction, and vision training, where embedding-space objectives such as JEPAs have proven more effective. LLM-JEPA combines the standard LLM training objective with a JEPA objective, improving the model's ability to form abstractions; the combined objective is sketched in the example at the end of this description. The method leverages datasets that provide multiple views of the same underlying knowledge, such as a natural-language description and its equivalent code. Empirical results show that LLM-JEPA outperforms standard LLM training across a range of models and datasets, suggesting the promise of JEPA-centric pretraining and finetuning for improving LLMs' reasoning and generative capabilities. The study contributes a new JEPA-based training objective together with extensive validation across models and datasets.

#LLM #JEPA #NLP #MachineLearning #RepresentationLearning #DeepLearning #AI

paper - http://arxiv.org/pdf/2509.14252v1
subscribe - https://t.me/arxivpaper

donations:
USDT: 0xAA7B976c6A9A7ccC97A3B55B7fb353b6Cc8D1ef7
BTC: bc1q8972egrt38f5ye5klv3yye0996k2jjsz2zthpr
ETH: 0xAA7B976c6A9A7ccC97A3B55B7fb353b6Cc8D1ef7
SOL: DXnz1nd6oVm7evDJk25Z2wFSstEH8mcA1dzWDCVjUj9e

created with NotebookLM
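The combined objective described above amounts to the usual next-token loss plus an embedding-space prediction term between the two views. Below is a minimal PyTorch sketch assuming a Hugging-Face-style causal LM; the last-token view embeddings, the identity predictor, the cosine distance, the stop-gradient on the target view, and the weight `lam` are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def llm_jepa_loss(model, text_ids, code_ids, lam=1.0):
    """Sketch of a combined LLM + JEPA training objective.

    Assumes `model` is a causal LM (Hugging-Face-style) that returns
    `loss` and `hidden_states` when called with labels and
    output_hidden_states=True.
    """
    # Standard next-token prediction loss on the text view.
    text_out = model(input_ids=text_ids, labels=text_ids,
                     output_hidden_states=True)
    lm_loss = text_out.loss

    # Embed the second view (e.g., code) with the same network.
    # Assumption: the code view is treated as a stop-gradient target,
    # as is common in JEPA-style setups.
    with torch.no_grad():
        code_out = model(input_ids=code_ids, output_hidden_states=True)

    # Last-token hidden states as view embeddings (an assumption;
    # also assumes sequences are not right-padded).
    z_text = text_out.hidden_states[-1][:, -1, :]
    z_code = code_out.hidden_states[-1][:, -1, :]

    # JEPA term: predict the code-view embedding from the text-view
    # embedding. Here the "predictor" is the identity and the distance
    # is cosine, both simplifying assumptions.
    jepa_loss = 1.0 - F.cosine_similarity(z_text, z_code, dim=-1).mean()

    return lm_loss + lam * jepa_loss
```

In this framing, setting lam = 0 recovers standard LLM training, which is exactly the baseline the paper compares against.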