How to Think About Memory in AI Agents (ft. Richmond Alake)

🔗 AI Engineering Consultancy: https://brainqub3.com/
🔍 AI Fact-Checking Tool: https://check.brainqub3.com/

Richmond Alake has been thinking about agent memory since Andrew Ng asked him to help teach prompt compression and query optimization. In this conversation, we explore how his background in databases has shaped a different way of thinking about where memory should live in AI agent architectures. We cover the hierarchy that sits above prompt engineering and context engineering, why most agent builders are optimizing at the wrong level, and a principle about memory placement that I've already started applying in my own work. This one is for anyone building agents who wants to think more rigorously about how data flows through their systems.

Richmond's Links:
📓 Memory & Context Engineering Notebook: https://github.com/oracle-devrel/orac...
💼 LinkedIn: / richmondalake
🎥 YouTube: / @richmond_a

Timestamps:
0:00 - Intro
[Add remaining timestamps]

#AIAgents #ContextEngineering #AgentMemory #LLMs #AIEngineering