In this video, we dive into the Hierarchical Reasoning Model (HRM), a new approach to AI reasoning inspired by the human brain. It was recently introduced by Sapient Intelligence, Singapore, in a research paper titled "Hierarchical Reasoning Model" - https://arxiv.org/pdf/2506.21734

CHAPTERS
00:00 Introduction
02:22 Chain of Thought Reasoning
03:06 Issues with Chain of Thought
04:58 Benchmark Tests
06:22 HRM Training & Testing Methodology
07:03 Benchmark Results and Comparison
09:34 What is HRM?
10:27 Technical working of HRM - Single forward pass
14:47 Summary
15:46 What does this mean for AI Research?

Reasoning means making and executing a sequence of steps to reach a goal: plan, execute, and reflect. Current state-of-the-art large language models (LLMs) do this using Chain-of-Thought (CoT) prompting, which "externalizes" reasoning into words by breaking complex tasks into simpler steps. But CoT has issues:
- Brittle task decomposition: one wrong step can break the whole answer.
- It requires extensive training data.
- High latency: responses are slow because every step is written out.
It is also poorly suited to tasks that need complex algorithmic reasoning.

HRM takes a different approach. It is hierarchical, with two interconnected "thinking" modules:
- High-level (H) module: slow, abstract planning.
- Low-level (L) module: rapid, detailed computations.
These modules communicate via hidden states (zH and zL), running in N × T cycles:
1. The input network processes the problem.
2. The L module runs multiple fast steps per cycle.
3. The H module updates after each cycle to adjust the plan.
4. The output network produces the final answer.

With only 27 million parameters, HRM works without pre-training or CoT data, yet outperforms much larger models on tough reasoning benchmarks like ARC, Sudoku-Extreme, and Maze-Hard.
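The nested N × T update scheme described above can be sketched in a few lines of NumPy. This is only an illustrative toy, not the paper's actual architecture: the weight matrices, tanh updates, dimensions, and function names below are all assumptions standing in for the trained input, L, H, and output networks.

```python
import numpy as np

def hrm_forward(x, n_cycles=2, t_steps=4, dim=8, seed=0):
    """Toy sketch of HRM's single forward pass (illustrative only).

    A high-level state z_h updates once per slow cycle; a low-level state
    z_l takes t_steps fast updates per cycle, conditioned on z_h and the
    encoded input. Random weights stand in for trained networks.
    """
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(dim, dim))   # stand-in for the input network
    W_l = rng.normal(size=(dim, dim))    # stand-in for the L module
    W_h = rng.normal(size=(dim, dim))    # stand-in for the H module
    W_out = rng.normal(size=(dim, dim))  # stand-in for the output network

    x_enc = np.tanh(W_in @ x)   # 1. input network processes the problem
    z_h = np.zeros(dim)         # high-level hidden state (zH)
    z_l = np.zeros(dim)         # low-level hidden state (zL)

    for _ in range(n_cycles):        # N slow cycles
        for _ in range(t_steps):     # 2. T fast L-module steps per cycle
            z_l = np.tanh(W_l @ (z_l + z_h + x_enc))
        z_h = np.tanh(W_h @ (z_h + z_l))  # 3. H module adjusts the plan
    return W_out @ z_h          # 4. output network reads the final state
```

The key structural point the sketch captures is that z_l converges quickly within each cycle while z_h changes only N times, giving the slow/fast timescale separation the video attributes to HRM.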
#AI #MachineLearning #DeepLearning #Reasoning #ChainOfThought #HierarchicalReasoningModel #HRM #ArtificialIntelligence #Sudoku #MazeSolver #ARCAGI #NeuralNetworks #BrainInspiredAI #LargeLanguageModels #LLM #AIResearch #AIBenchmarks #Neuroscience #AIArchitecture #AlgorithmicReasoning