Stephanie Chan from Google DeepMind visited the Kempner Seminar Series on February 6, 2026, to discuss "What Happens After We Solve Continual Learning?"

Researchers often point to continual learning as a major missing component of modern AI models, and as an area in which neuroscience may inform AI model development. With increased focus on this research area, we may soon find ourselves in a world with widely deployed continual learning agents. The potential benefits are vast, but continual learning also poses major challenges for AI safety and alignment: many existing techniques assume a single static base model (e.g., RLxF-based post-training) and are not suited to dynamically changing models. In this talk, I will lay out some of these challenges, with examples. I will also describe potential starting points for technical solutions, drawing connections to catastrophic forgetting and to Quine's "web of ideas."

Stephanie Chan is a Staff Research Scientist at Google DeepMind. She received her PhD in computational neuroscience from Princeton University, and an SB in physics and an SB in brain & cognitive sciences, both from MIT. Her research covers several broad areas. One line of work aims at a scientific understanding of modern AI models, especially in-context learning. She also works on developing AI systems for human enrichment and empowerment, including work on AI for education and AI to improve the information ecosystem. Her primary focus now is understanding the future impacts of AI on society.