Most AI agents still look great in demos and fall apart in production. Alexander Page, Engineering Director of Applied AI at BigPanda, shares how his team builds agents that internalize user corrections and improve without requiring source data fixes. Learn why evaluating tool call sequences beats tracking final outputs, and how to design multi-agent architectures that actually scale.

In this episode, Saket sits down with Alex to unpack production-grade AI agent design for IT operations. From handling outdated Confluence pages to breaking 100-tool systems into domain-specific agents, this conversation covers the practical realities of enterprise AI deployment.

Chapters:
00:00 Introduction
00:29 Alex's journey from sales engineering to Applied AI
01:29 Why ChatGPT sparked the move into AI for IT operations
02:38 What makes agents production-ready vs demo-ready
03:54 Building systems that learn and improve over time
04:52 Enterprise considerations and guardrails
05:49 Data access and honoring user permissions
06:24 Framework for deciding which use cases to pursue
07:40 Breaking complex problems into parts
09:01 Data quality challenges in RAG systems
11:25 Traceability and citing sources
12:24 Internalizing user corrections without fixing source data
13:45 Handling data gaps when nothing retrieves
15:20 Human in the loop for corrections
16:39 Prompting and context engineering techniques
17:57 Lost in the middle problem with large context windows
19:33 Why context engineering matters more than token limits
20:06 RAG as a component of agentic systems
23:25 AI tooling and developer productivity
25:20 6-10x productivity gains with Cursor
26:28 Learning model-specific strengths for different tasks
28:09 Evaluating agents by tool call sequences
29:56 Orchestrating multi-agent hierarchies
30:53 Prototype shelf for future foundation model capabilities
31:28 Defining agent responsibilities and tool isolation
34:17 MCP explained and its limitations
36:44 A2A protocol for agent-to-agent communication
37:11 MCP as snake oil when misused
39:40 Accessibility of AI development today
41:06 Advice for building applied AI skills