Workshop links:
WhyLabs free sign-up: https://whylabs.ai/free
Notebook: https://s.whylabs.io/LLM-Monitoring-L...
LangKit GitHub: https://github.com/whylabs/langkit
Join the AI Slack group: https://bit.ly/r2ai-slack

Join this hands-on workshop to implement ML monitoring on large language models (LLMs) with WhyLabs LangKit. Effectively monitoring and managing large language models such as OpenAI's GPT has become essential in the rapidly advancing field of AI. In response to this growing demand, WhyLabs has created a powerful new tool, LangKit, to ensure LLM applications are monitored continuously and operated responsibly. Join our workshop designed to equip you with the knowledge and skills to use LangKit with Hugging Face models. Guided by our team of experienced AI practitioners, you'll learn how to evaluate, troubleshoot, and monitor large language models more effectively. Once you complete the workshop, you'll also receive a certificate!

This workshop will cover how to:
Understand: Evaluate prompts, responses, and user interactions
Guardrail: Configure acceptable limits to flag issues such as malicious prompts, toxic responses, hallucinations, and jailbreak attempts
Detect: Set up monitors and alerts to help prevent undesirable behavior

What you'll need:
A free WhyLabs account (https://whylabs.ai/free)
A Google account (for saving a Google Colab notebook)

Who should attend:
Anyone interested in building applications with LLMs, AI observability, model monitoring, MLOps, or DataOps! This workshop is designed to be approachable for most skill levels. Familiarity with machine learning and Python will be useful but is not required to attend.

By the end of this workshop, you'll be able to apply ML monitoring techniques to your large language models to catch deviations and biases. Bring your curiosity and your questions.
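The Guardrail step above can be sketched in plain Python: compute a few metrics over a prompt/response pair and flag any value outside its configured limits. This is a minimal, stdlib-only illustration of the threshold idea; the metric names, heuristics, and limits below are illustrative stand-ins, not LangKit's actual API (LangKit computes much richer metrics such as toxicity and jailbreak similarity).

```python
def text_metrics(prompt: str, response: str) -> dict:
    """Compute toy per-interaction metrics (stand-ins for LangKit metrics)."""
    return {
        "prompt.char_count": len(prompt),
        "response.char_count": len(response),
        # Crude refusal heuristic, purely for illustration.
        "response.refusal": int(response.lower().startswith("i can't")),
    }

# Acceptable limits: (min, max) per metric — a stand-in for the
# thresholds you would configure in a monitoring platform.
LIMITS = {
    "prompt.char_count": (1, 2000),
    "response.char_count": (1, 4000),
    "response.refusal": (0, 0),
}

def check_guardrails(metrics: dict, limits: dict) -> list:
    """Return the names of metrics that fall outside their limits."""
    return [
        name for name, value in metrics.items()
        if name in limits and not (limits[name][0] <= value <= limits[name][1])
    ]

metrics = text_metrics("What is AI observability?", "I can't help with that.")
print(check_guardrails(metrics, LIMITS))  # → ['response.refusal']
```

In the workshop itself, the same pattern runs with LangKit-generated metrics and WhyLabs monitors and alerts instead of a hand-rolled threshold check.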
By the end of the workshop, you'll leave with a new level of comfort and familiarity with LangKit and be ready to take your language model development and monitoring to the next level.

About the instructor:
Sage Elliott enjoys breaking down the barriers to AI observability, talking with amazing people in the Robust & Responsible AI community, and teaching workshops on machine learning. Sage has worked in hardware and software engineering roles at various startups for over a decade. Connect with Sage on LinkedIn: / sageelliott

About WhyLabs:
WhyLabs.ai is an AI observability platform that prevents data and model performance degradation by allowing you to monitor your data and machine learning models in production. https://whylabs.ai/
Check out our open-source data & ML monitoring project: https://github.com/whylabs/whylogs

Do you want to connect with the community, learn about WhyLabs, or get project support? Join the WhyLabs + Robust & Responsible AI community Slack: https://bit.ly/rsqrd-slack