Ensuring the quality and reliability of Generative AI applications in production is paramount. This session dives into the comprehensive suite of tools provided by Databricks, including inference tables, Lakehouse Monitoring, and MLflow, to facilitate rigorous evaluation and quality assurance of model responses. Discover how to harness these components effectively to conduct both offline evaluations and real-time monitoring, ensuring your GenAI applications meet the highest standards of performance and reliability.

We'll explore best practices for using LLMs as judges to assess response quality, integrating MLflow for tracking experiments and model versions, and leveraging the unique capabilities of inference tables and Lilac for enhanced model management and evaluation. You'll learn how to optimize your workflow and ensure your GenAI applications are robust, scalable, and aligned with your production goals.

Talk By: Alkis Polyzotis, Senior Staff Software Engineer, Databricks; Michael Carbin, Principal Researcher, Databricks

Here's more to explore:
LLM Compact Guide: https://dbricks.co/43WuQyb
Big Book of MLOps: https://dbricks.co/3r0Pqiz

Connect with us:
Website: https://databricks.com
Twitter: / databricks
LinkedIn: / data…
Instagram: / databricksinc
Facebook: / databricksinc
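As a minimal sketch of the LLM-as-judge pattern the talk covers: a judge model is prompted to grade each response and its free-text verdict is parsed into a numeric score. The prompt template, the `call_judge_llm` callable, and the 1-5 rating scale below are illustrative assumptions, not the Databricks or MLflow API.

```python
import re

# Hypothetical grading prompt; in practice this would be tuned and validated.
JUDGE_PROMPT = (
    "Rate the following answer to the question on a 1-5 scale for correctness. "
    "Reply with 'Rating: <n>' followed by a short justification.\n"
    "Question: {question}\nAnswer: {answer}"
)

def parse_rating(judge_output: str):
    """Extract the numeric rating from the judge model's free-text reply."""
    match = re.search(r"Rating:\s*([1-5])", judge_output)
    return int(match.group(1)) if match else None

def evaluate_response(question, answer, call_judge_llm):
    """Score one (question, answer) pair with any chat-completion callable.

    `call_judge_llm` is a placeholder for a real model call, e.g. a
    Databricks model serving endpoint or any LLM client.
    """
    prompt = JUDGE_PROMPT.format(question=question, answer=answer)
    return parse_rating(call_judge_llm(prompt))

# Usage with a stubbed judge model:
fake_judge = lambda prompt: "Rating: 4. Mostly correct but misses a detail."
score = evaluate_response("What is MLflow?", "An ML lifecycle tool.", fake_judge)
print(score)  # → 4
```

In an offline evaluation run, `evaluate_response` would be mapped over a curated eval set and the scores logged as metrics alongside the model version; the same scoring can run over logged production traffic (for example, rows from an inference table) for ongoing monitoring.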