The paper introduces DeepSeek-V3.2, a Large Language Model (LLM) framework explicitly designed to bridge the performance gap between open-source and proprietary systems. A key technical advance is the DeepSeek Sparse Attention (DSA) mechanism, which improves efficiency by reducing the computational complexity of processing long-context sequences. The model's reasoning and agentic proficiencies were strengthened through a scalable reinforcement learning framework that allocates substantial post-training compute, together with a novel synthesis pipeline for generating large-scale agentic tasks. DeepSeek-V3.2 reaches performance parity with closed-source models such as GPT-5 on standard benchmarks, demonstrating strong tool-use and generalization capabilities. Notably, the high-compute variant, DeepSeek-V3.2-Speciale, achieved gold-medal performance at the 2025 International Mathematical Olympiad (IMO) and other top-tier competitions, matching the capability of models such as Gemini-3.0-Pro. Overall, the work establishes a new performance milestone for open LLMs, though the authors note that challenges remain in world knowledge and token efficiency.
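To make the sparse-attention idea concrete, below is a minimal top-k sparse attention sketch in PyTorch. It illustrates only the general principle behind DSA-style efficiency gains (each query attends to a small selected subset of keys rather than the full context); it is not the paper's actual DSA kernel, and the function name, tensor shapes, and the top_k parameter are hypothetical. A real implementation would select keys with a cheap indexer instead of materializing the full attention score matrix.

```python
# Sketch: causal top-k sparse attention for a single head.
# Assumption: restricting each query to its top_k highest-scoring keys
# approximates the "attend to a small selected subset" idea behind DSA.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, top_k=64):
    """q, k, v: (T, d). Each query attends to at most top_k past keys."""
    T, d = q.shape
    scores = q @ k.T / d**0.5                           # (T, T) score matrix
    causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(causal, float("-inf"))  # causal mask

    # Keep only the top_k scores per query row; mask out everything else.
    # (A production kernel would avoid building the dense T x T matrix.)
    kk = min(top_k, T)
    topk_vals, topk_idx = scores.topk(kk, dim=-1)
    sparse_scores = torch.full_like(scores, float("-inf"))
    sparse_scores.scatter_(-1, topk_idx, topk_vals)

    attn = F.softmax(sparse_scores, dim=-1)             # weights over selected keys only
    return attn @ v

# Usage: 1024 tokens, 64-dim head; each token attends to at most 64 keys.
q = k = v = torch.randn(1024, 64)
out = topk_sparse_attention(q, k, v, top_k=64)
print(out.shape)  # torch.Size([1024, 64])
```

With a fixed top_k, the per-query attention cost stops growing with context length, which is the source of the long-context savings the summary attributes to DSA; the exact key-selection mechanism in DeepSeek-V3.2 is described in the paper itself.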