Observability is evolving from passive monitoring to intelligent analysis, and CloudWatch is leading the charge. In this episode, we explore how AWS is fundamentally changing the way teams monitor, analyze, and troubleshoot modern applications, from traditional microservices to cutting-edge AI workloads.

It starts with Custom Metrics in Application Signals, which let you define the application-specific metrics that matter most to your business. Using the OpenTelemetry Metrics SDK or Span Metrics, you can correlate your custom data with standard metrics like latency and fault rates in a unified view, creating the rich, contextual foundation needed for true root cause analysis (see the first sketch below).

But having rich metrics is only half the battle; analyzing them intelligently is what transforms observability. Enter the CloudWatch Application Signals MCP Server, which brings AI assistants like Claude and Amazon Q directly into your monitoring workflow. Through the Model Context Protocol, your AI assistant can perform comprehensive service audits, analyze SLO breaches with 100% trace visibility via Transaction Search (not just X-Ray's 5% sampling), and deliver natural language insights from your telemetry data. Imagine asking "Why is my payment service slow?" and getting actionable root cause analysis with correlated traces, logs, and metrics, automatically (a connection sketch follows below).

This intelligent approach extends to the newest frontier: generative AI applications. With Gen AI Observability now generally available, CloudWatch provides complete visibility into Amazon Bedrock AgentCore deployments, including Built-in Tools, Gateways, Memory, and Identity components. Whether you're using Strands Agents, LangChain, or LangGraph, you get out-of-the-box monitoring of token usage, latency, and errors across your entire AI stack, from model invocations to agent operations (see the final sketch below).

Together, these capabilities represent a shift from reactive monitoring to proactive, AI-assisted observability that helps you ship faster, troubleshoot smarter, and scale confidently.
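To make the custom-metrics idea concrete, here is a minimal sketch of publishing a business-specific metric with the OpenTelemetry Metrics SDK. The service name, metric name, attributes, and OTLP endpoint are all illustrative; in an Application Signals setup the exporter would typically point at the CloudWatch agent or an ADOT collector rather than a hardcoded localhost address.

```python
# Minimal sketch: emit a custom business metric via the OpenTelemetry
# Metrics SDK. Names and the endpoint below are assumptions, not values
# from the episode.
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# Export metrics over OTLP to a local collector (endpoint is an assumption).
reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(endpoint="http://localhost:4317")
)
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("payment-service")

# A business-specific counter; name and attributes are illustrative.
orders_processed = meter.create_counter(
    "orders.processed",
    unit="1",
    description="Number of orders processed",
)

def process_order(order):
    # ... business logic ...
    orders_processed.add(1, {"payment.method": order["payment_method"]})
```

Because the counter flows through the same OpenTelemetry pipeline as the standard Application Signals metrics, it can be viewed alongside latency and fault rates in a unified dashboard.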
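To show how an assistant plugs in over the Model Context Protocol, here is a hedged sketch that uses the official MCP Python SDK to launch the server over stdio and list the tools it exposes. The server package name (awslabs.cloudwatch-appsignals-mcp-server) is an assumption based on AWS Labs naming conventions; assistants like Claude Desktop or Amazon Q would normally register the same launch command in their MCP configuration instead of running client code like this.

```python
# Sketch: connect to the Application Signals MCP server over stdio and
# enumerate its tools. The package name in args is an assumption.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(
    command="uvx",
    args=["awslabs.cloudwatch-appsignals-mcp-server@latest"],
)

async def main():
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover what the server can do (audits, SLO analysis, etc.).
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```

Once the server is registered with an assistant, a prompt like "Why is my payment service slow?" lets the assistant invoke these tools to pull correlated traces, logs, and metrics on your behalf.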
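Finally, a minimal sketch of the kind of telemetry Gen AI Observability surfaces: a Bedrock model call wrapped in an OpenTelemetry span that records token usage. The attribute names follow the OpenTelemetry GenAI semantic conventions; the model ID and span name are illustrative, and frameworks like Strands Agents or LangChain emit comparable data out of the box rather than requiring manual spans like this.

```python
# Sketch: record token usage for a Bedrock call as OTel span attributes.
# Assumes a TracerProvider/exporter is configured elsewhere (e.g., by the
# ADOT SDK); without one the spans are no-ops but the code still runs.
import boto3
from opentelemetry import trace

tracer = trace.get_tracer("genai-demo")
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # model ID is illustrative

with tracer.start_as_current_span("chat claude-3-haiku") as span:
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user",
                   "content": [{"text": "Why is my payment service slow?"}]}],
    )
    usage = response["usage"]
    # Attribute names follow the OTel GenAI semantic conventions.
    span.set_attribute("gen_ai.request.model", MODEL_ID)
    span.set_attribute("gen_ai.usage.input_tokens", usage["inputTokens"])
    span.set_attribute("gen_ai.usage.output_tokens", usage["outputTokens"])
```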