Generative AI is moving fast, but how do you know your LLMs are performing reliably? In this lightning talk, Richard Shan from CTS explains why observability matters, which metrics to track, and how developers can ensure their AI models deliver accurate, coherent, and timely outputs. Learn practical tips to monitor your systems and gain confidence in every deployment.

Title: Practical Generative AI Observability: Metrics and Tools for Real-Time Monitoring
Presented at All Things Open AI 2025
Presented by Richard Shan, CTS

Abstract: As generative AI systems power ever more critical applications, ensuring the reliability, fairness, and performance of these systems demands robust observability frameworks. This presentation focuses on the emerging discipline of generative AI observability through a deep dive into strategies, methods, and best practices for real-time monitoring of generative systems. Attendees will learn measurement techniques to track key performance indicators such as output coherence, accuracy, and latency, while also gaining insights into how to detect and mitigate issues like bias, hallucination, and model drift. We'll explore state-of-the-art observability tools designed for generative AI, including those tailored for large language models, RAG frameworks, and multimodal systems. The discussion will cover innovations in monitoring every component of the pipeline, from data collection and preprocessing to inference execution and outputs, as well as the integration of observability into LLMOps workflows for continuous improvement. The talk will walk through real-world cases to show how leading organizations maintain reliability, transparency, and ethical compliance in their generative AI solutions.
By the end of the session, participants will have actionable knowledge to construct and support observability frameworks that improve system robustness and make their generative AI applications trustworthy and accountable.

Find more info about All Things Open:
On the web: https://www.allthingsopen.org/
Twitter: @allthingsopen
LinkedIn: all-things-open
Instagram: @allthingsopen
Facebook: allthingsopen
Mastodon: https://mastodon.social/@allthingsopen
Threads: https://www.threads.net/@allthingsopen
Bluesky: https://bsky.app/profile/allthingsope...
2025 conference: https://2025.allthingsopen.org/
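The abstract names latency and accuracy among the key performance indicators to track. As an illustration only (the talk does not prescribe an implementation, and the class and function names below are invented for this sketch), a minimal per-call metrics recorder for an LLM endpoint might look like this:

```python
import time
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class LLMMetrics:
    """Records per-call latency and optional exact-match accuracy
    for any callable that maps a prompt string to an output string."""
    latencies: list = field(default_factory=list)
    accuracy_flags: list = field(default_factory=list)

    def record(self, call, prompt, expected=None):
        """Time a single model call; if a reference answer is given,
        score it with a naive exact-match check."""
        start = time.perf_counter()
        output = call(prompt)
        self.latencies.append(time.perf_counter() - start)
        if expected is not None:
            self.accuracy_flags.append(output.strip() == expected.strip())
        return output

    def summary(self):
        """Aggregate the recorded calls into headline numbers."""
        return {
            "mean_latency_s": mean(self.latencies),
            "max_latency_s": max(self.latencies),
            "accuracy": mean(self.accuracy_flags) if self.accuracy_flags else None,
        }


# Usage with a stand-in model (a real deployment would wrap an API client):
metrics = LLMMetrics()
metrics.record(lambda p: "Paris", "Capital of France?", expected="Paris")
metrics.record(lambda p: "Berlin", "Capital of Germany?", expected="Berlin")
print(metrics.summary())
```

In practice these per-call numbers would be exported to a monitoring backend rather than printed, and exact-match scoring would be replaced by the coherence and drift checks the talk discusses; this sketch only shows the shape of the instrumentation layer.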