Why can an AI solve Olympiad-level mathematics but fail to count the Rs in "strawberry"? This video reveals the fundamental constraint behind these bizarre failures: LLMs have a fixed computational budget per token. You'll discover why the same model that handles complex physics problems confidently claims 9.11 is larger than 9.9, and how training patterns from Bible verse notation cause this specific error.

📚 Key concepts covered:
• Fixed computation per token — Every token gets the same ~100 layers of processing, whether the task is trivial or complex
• Tokenization blindness — Models see tokens, not individual characters, making letter counting fundamentally unreliable (see the first sketch below)
• Chain-of-thought reasoning — Why "think step by step" dramatically improves accuracy by spreading computation across more tokens
• Training pattern interference — How statistical associations (like Bible verse notation, where verse 9:11 comes after 9:9) can override correct reasoning (see the second sketch below)
• Code tools as the solution — Why having models write Python is more reliable than "mental math" (see the third sketch below)

━━━━━━━━━━━━━━━━━━━━━━━━
🎓 ORIGINAL SOURCE
━━━━━━━━━━━━━━━━━━━━━━━━
This video distills concepts from:
• Deep Dive into LLMs like ChatGPT

Full credit to the original creator for the source material. Please visit the original lecture for the complete, in-depth discussion.

━━━━━━━━━━━━━━━━━━━━━━━━
📖 About Lecture Distilled
━━━━━━━━━━━━━━━━━━━━━━━━
Long lectures. Short videos. Core insights. We distill lengthy academic lectures into focused concept videos that respect your time while preserving the essential knowledge.

🔗 GitHub: https://github.com/Augustinus12835/au...

#LLM #ArtificialIntelligence #MachineLearning #GPT #Tokenization #AIExplained #DeepLearning #NeuralNetworks
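━━━━━━━━━━━━━━━━━━━━━━━━
🐍 Code sketches
━━━━━━━━━━━━━━━━━━━━━━━━
A minimal look at tokenization blindness. This sketch assumes the tiktoken library and its cl100k_base encoding; the video doesn't prescribe a specific tokenizer, but any BPE tokenizer shows the same effect: the model receives multi-character token IDs, never individual letters.

```python
# Sketch of tokenization blindness (assumes `pip install tiktoken`;
# the choice of tokenizer here is an illustrative assumption).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
tokens = enc.encode("strawberry")

# The model sees only these integer IDs, so "how many Rs?" asks about
# characters it never directly observes.
for tok in tokens:
    print(tok, enc.decode_single_token_bytes(tok))
```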
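The 9.11 vs 9.9 confusion also dissolves once the comparison is delegated to code instead of pattern-matching: as plain decimals, 9.11 is the smaller number, even though verse 9:11 follows verse 9:9 in chapter-and-verse notation.

```python
# Plain numeric comparison, no "mental math" required.
print(9.11 > 9.9)  # False: 9.11 is smaller as a decimal
print(9.11 < 9.9)  # True
```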
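Likewise, the letter-counting failure disappears the moment the model writes a line of Python rather than counting "in its head":

```python
# Exact character-level count that a tokenized model can't do reliably.
print("strawberry".count("r"))  # 3
```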