WTH is Continual Learning?
In this episode of Deep Learning Talks, we explore one of the most important and unresolved problems on the path to Artificial General Intelligence: continual learning. We have heard about deep learning, representation learning, and reinforcement learning, but continual learning introduces a fundamentally different paradigm. Traditional deep learning operates in two distinct phases: an intensive training phase where model parameters are optimized, and a frozen inference phase where the model simply performs forward passes without updating its weights. Once trained, the weights remain static. Humans, however, do not operate this way. Biological intelligence continuously updates, adapts, forgets, and evolves while interacting with the world.

This video breaks down the contrast between classical training and lifelong learning. We examine why large neural networks freeze their weights after training, why updating billions or trillions of parameters in real time is computationally infeasible, and why current systems cannot truly learn from every interaction.

We discuss core challenges such as catastrophic forgetting, where new learning overwrites previously acquired knowledge, and the stability-plasticity dilemma, which highlights the tension between preserving past knowledge and acquiring new information. We also explore engineering approaches that attempt to approximate continual learning, including Elastic Weight Consolidation, replay-based memory methods, architectural expansion strategies, and online learning techniques.

We then analyze how modern large language models simulate continual learning without actually updating their parameters. Retrieval-Augmented Generation, vector databases, and the separation between parametric memory and non-parametric memory allow systems to appear adaptive while keeping core weights frozen. This represents a paradigm shift in how we think about intelligence: decoupling the "brain" from external memory.
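To make the Elastic Weight Consolidation idea concrete, here is a minimal numpy sketch of its penalty term. It anchors each parameter near its post-previous-task value, weighted by a diagonal Fisher-information estimate of how important that parameter was. All function and variable names here are illustrative, not from any particular library.

```python
import numpy as np

def ewc_penalty(params, old_params, fisher_diag, lam=1.0):
    """Quadratic penalty that resists moving weights the old task
    deemed important (diagonal Fisher approximation)."""
    return 0.5 * lam * float(np.sum(fisher_diag * (params - old_params) ** 2))

def total_loss(task_loss, params, old_params, fisher_diag, lam=1.0):
    # Loss on the new task plus the consolidation term: plasticity
    # for the new data, stability for the old knowledge.
    return task_loss + ewc_penalty(params, old_params, fisher_diag, lam)
```

The hyperparameter `lam` directly expresses the stability-plasticity trade-off discussed above: a large value protects old knowledge, a small value favors the new task.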
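The parametric/non-parametric split behind Retrieval-Augmented Generation can be sketched in a few lines: the "brain" (model weights) stays frozen, while new knowledge lives in an external vector store that is searched at query time. The embeddings below are random stand-ins for a real encoder, and all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# External, non-parametric memory: documents plus unit-norm embedding vectors.
docs = ["EWC adds a quadratic penalty on important weights.",
        "Replay buffers rehearse samples from earlier tasks.",
        "LoRA trains small low-rank adapter matrices."]
doc_vecs = rng.normal(size=(len(docs), 16))
doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)

def retrieve(query_vec, k=1):
    # Cosine-similarity search over the store; the frozen model would then
    # condition on the retrieved text instead of updating its weights.
    q = query_vec / np.linalg.norm(query_vec)
    scores = doc_vecs @ q
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]
```

Adding knowledge is then an append to `docs`/`doc_vecs`, not a gradient step, which is why such systems appear adaptive without any parameter updates.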
We also touch on promising research directions such as parameter-efficient fine-tuning methods like LoRA, meta-learning approaches that rethink how models learn to learn, and the role of world models in evolving latent representations beyond static training. Continual learning remains one of the most critical unsolved problems in AI. If we achieve a true breakthrough in this domain, it may bring us significantly closer to AGI. Until then, most modern systems rely on clever engineering hacks rather than genuine lifelong adaptation. In upcoming episodes, we will explore how frontier AI labs are tackling this challenge and what it means for the future of intelligence.
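As a rough illustration of the LoRA idea mentioned above: rather than updating a full weight matrix W, one trains two small low-rank factors whose product is added to the frozen base weight. This numpy sketch follows the standard LoRA parameterization; the dimensions and scaling are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 4  # rank r much smaller than d_in, d_out

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def lora_forward(x):
    # Base path uses frozen W; the adapter adds a scaled low-rank correction,
    # so only r * (d_in + d_out) parameters need training.
    return W @ x + (alpha / r) * (B @ (A @ x))
```

Because B starts at zero, the adapted model initially behaves exactly like the frozen base model, and fine-tuning only ever touches the small A and B matrices.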