The most popular fix for context rot — asking AI to summarize and continue — actually makes things worse. JetBrains presented research at a NeurIPS workshop showing that simpler techniques matched or beat AI summarization, while MIT researchers found context compaction is "rarely expressive enough" for real tasks.

What you'll learn:
→ Token Budget: How a 15-minute chat burns 6,000 tokens (and how to measure it)
→ The Complexity Trap: Why AI summaries lose information and cost more tokens
→ The State Snapshot: A simple document that replaces lossy AI summaries
→ The One Goal Rule: Why every conversation should be disposable

This isn't theory — it's backed by JetBrains, MIT, Anthropic and OpenAI documentation, and the ContextBranch study showing 58.1% less context used with the same quality results.

00:00 - Context Rot Part 2: 4 fixes that actually work
00:29 - Recap: 50% Rule, Signal-to-Noise, Multi-Needle Collapse
01:24 - For ChatGPT, Claude, Gemini chat users (not devs)
02:08 - Token Budget: 150 words/min = 6,000 tokens in 15 min
03:40 - Google AI Studio: Free token counter (live demo)
05:04 - Real conversation test: Token count revealed
06:08 - Excel/CSV files: Hidden token weight + optimization
08:59 - The Complexity Trap: Why AI summaries backfire (JetBrains)
11:35 - Karpathy: "New Conversation for each request"
12:07 - The State Snapshot: Manual context document template
14:52 - The One Goal Rule: One mission per conversation
16:13 - Project Instructions: Best seat in the context window
16:52 - Summary: 4 fixes + honest correction on memory banks

📚 SOURCES & RESEARCH

The 50% Rule — Advertised vs Actual Context Length
Hsieh et al. — "RULER: What's the Real Context Size of Your Long-Context Language Models?" (COLM 2024, peer-reviewed)
https://arxiv.org/abs/2404.06654

Multi-Needle Collapse — 99.7% Single-Needle Retrieval
Reid et al. — "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context" (Google DeepMind, February 2024)
https://arxiv.org/abs/2403.05530

Non-English Token Multiplier — 1.5× to 13× More Tokens
Ahia et al. (2023); Yennie Jun — "All Languages Are NOT Created (Tokenized) Equal" (peer-reviewed)
https://arxiv.org/abs/2410.05864

Tools and Connectors Are Token-Intensive
Anthropic — "How large is the context window on paid Claude plans?" (official support documentation)
https://support.anthropic.com/en/arti...

The Complexity Trap — Simple Techniques Beat AI Summarization
Lindenbauer et al. — "The Complexity Trap: Simple Observation Masking Is as Efficient as LLM Summarization" (DL4Code Workshop at NeurIPS 2025, workshop peer-reviewed)
https://arxiv.org/abs/2508.21433

Context Compaction — "Rarely Expressive Enough"
Zhang, Kraska & Khattab — "Recursive Language Models" (arXiv preprint, December 2025, not yet peer-reviewed)
https://arxiv.org/abs/2512.24601

Summarizer Variability and Drift Warning
OpenAI — "Context Engineering: Short-Term Memory Management with Sessions" (official Cookbook / Agents SDK documentation)
https://cookbook.openai.com/examples/...

ContextBranch — 58.1% Less Context, Same Quality
Nanjundappa et al. — "Context Branching for LLM Conversations" (arXiv preprint, December 2025, not yet peer-reviewed)
https://arxiv.org/abs/2512.13914

Auto-Compaction Behavior
Anthropic — Compaction documentation (official API documentation)
https://docs.anthropic.com/en/docs/bu...

Project Instructions — Beginning of Context Window Gets Priority
Liu et al. — "Lost in the Middle: How Language Models Use Long Contexts" (TACL 2024, peer-reviewed)
https://arxiv.org/abs/2307.03172

Andrej Karpathy on Starting New Conversations
Karpathy — "When working with LLMs I am used to starting 'New Conversation' for each request" (X/Twitter post)
https://x.com/karpathy/status/1902737...
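The Token Budget chapter's arithmetic (150 words/min adding up over a 15-minute chat) can be sketched as a back-of-envelope estimator. The words-per-minute and tokens-per-word figures below are rough assumptions (English averages), not exact tokenizer counts; for real numbers, paste the transcript into a token counter such as Google AI Studio, as the video demonstrates.

```python
# Back-of-envelope token budget for a chat session.
# Assumptions: ~150 words per minute of conversation, ~1.3 tokens per word
# (a common English average; other languages can be far higher).

WORDS_PER_MINUTE = 150
TOKENS_PER_WORD = 1.3


def estimate_tokens(minutes: float,
                    words_per_minute: float = WORDS_PER_MINUTE,
                    tokens_per_word: float = TOKENS_PER_WORD) -> int:
    """Rough token count for a conversation of the given length."""
    return round(minutes * words_per_minute * tokens_per_word)


if __name__ == "__main__":
    for minutes in (5, 15, 30):
        print(f"{minutes:>3} min ≈ {estimate_tokens(minutes):,} tokens")
```

Note this counts only the raw words once; in a real chat the full history is re-sent with every message, so the effective token burn grows much faster, which is how a 15-minute session can reach the 6,000-token ballpark the video cites.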
#contextrot #aimemory #contextwindow #chatgpt #claude #TokenLimit #promptengineering #aiworkflow