Ep 67 - RAG Done Right: Measure The Evidence Or Drift Into Error
What happens when a brilliant-sounding AI gives the wrong answer with total confidence? We dig into the quiet culprit behind so many "LLM failures": retrieval. Rather than judging how smart a model sounds, we walk through how to judge whether it looked at the right evidence, why that matters in high-stakes domains like finance, healthcare, HR, and government, and how leaders can stop organizational drift driven by outdated or partial sources.

We break down four pillars every RAG team should track:

- Retrieval precision and recall, to balance noise versus coverage
- Context relevance and coverage, to ensure the retrieved passages actually answer the question
- Groundedness and fluency, so every claim traces back to evidence
- Accuracy and completeness, to catch stale or missing knowledge

Along the way, we share real-world patterns: chatbots citing old HR policies, assistants using superseded regulations, and tools surfacing obsolete medical guidance. We show how these errors spread when confidence outruns curation.

Then we get practical. We outline precision@K and recall@K, golden question sets tied to authoritative documents, LLM-based judging for relevance and groundedness, and continuous regression testing as knowledge bases evolve (minimal sketches of both appear at the end of these notes). More importantly, we frame the cultural shift: assign ownership for knowledge freshness, make sources visible next to answers, and normalize verification at every level. Treat AI answers as drafts, retrieval as evidence, and evaluation as the safeguard.

If you're running or planning a RAG system, start by asking to see retrieved sources, build a small high-stakes golden set, and set a cadence for archiving and updates. If this conversation helped sharpen your approach to reliable AI, subscribe, share with a teammate who manages content or compliance, and leave a quick review with one insight you're taking back to your team.

Want to join a community of AI learners and enthusiasts? AI Ready RVA (https://aireadyrva.com/) is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member (https://aireadyrva.com/membership-opt...) and support our AI literacy initiatives.
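For listeners who want to try the retrieval metrics discussed in the episode, here is a minimal sketch of precision@K and recall@K in Python. The document IDs and the labeled relevant set are illustrative; the only assumption is that you can log, per question, the ranked IDs your retriever returned and a human-labeled set of relevant IDs.

```python
# Minimal sketch of retrieval precision@K and recall@K.
# All IDs below are illustrative, not from any particular system.

def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k retrieved documents that are actually relevant."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return sum(1 for doc_id in top_k if doc_id in relevant) / len(top_k)

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of all relevant documents that appear in the top-k results."""
    if not relevant:
        return 0.0
    top_k = set(retrieved[:k])
    return len(top_k & relevant) / len(relevant)

# Example: two of the three human-labeled relevant chunks
# appear in the retriever's top 3 results.
retrieved = ["hr-policy-2024", "hr-policy-2019", "benefits-faq", "onboarding"]
relevant = {"hr-policy-2024", "benefits-faq", "parental-leave-2024"}

print(precision_at_k(retrieved, relevant, k=3))  # 2/3 ≈ 0.67
print(recall_at_k(retrieved, relevant, k=3))     # 2/3 ≈ 0.67
```

High precision with low recall means clean but incomplete evidence; the reverse means coverage buried in noise. Tracking both per question is what lets you tune K and chunking deliberately instead of by feel.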
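And here is a sketch of the golden-set regression idea: a handful of high-stakes questions tied to authoritative documents, re-run whenever the knowledge base changes. The question, document IDs, and "12 weeks" fact are made up for illustration, and ask_rag and llm_judge_grounded are hypothetical placeholders to wire up to your own pipeline and judge model; nothing here is a specific product's API.

```python
# Sketch of a golden-set regression check for a RAG system.
# ask_rag() and llm_judge_grounded() are placeholders you wire
# up yourself; the golden set entries are illustrative.

GOLDEN_SET = [
    {
        "question": "How many weeks of parental leave do full-time employees get?",
        "authoritative_doc": "hr-policy-2024",
        "must_mention": ["12 weeks"],  # facts the answer must contain
    },
    # ... more high-stakes questions tied to authoritative documents
]

def ask_rag(question: str) -> dict:
    """Call your RAG pipeline; return its answer, retrieved IDs, and passages."""
    raise NotImplementedError  # placeholder: wire up to your pipeline

def llm_judge_grounded(answer: str, passages: list[str]) -> bool:
    """Ask a judge model whether every claim in the answer traces to the passages."""
    raise NotImplementedError  # placeholder: wire up to your judge model

def run_regression(golden_set: list[dict]) -> list[str]:
    failures = []
    for case in golden_set:
        result = ask_rag(case["question"])
        # 1. Did retrieval surface the authoritative source?
        if case["authoritative_doc"] not in result["retrieved_ids"]:
            failures.append(f"{case['question']}: authoritative doc not retrieved")
        # 2. Does the answer contain the required facts?
        missing = [f for f in case["must_mention"] if f not in result["answer"]]
        if missing:
            failures.append(f"{case['question']}: missing facts {missing}")
        # 3. Is every claim grounded in the retrieved passages?
        if not llm_judge_grounded(result["answer"], result["passages"]):
            failures.append(f"{case['question']}: answer not grounded")
    return failures
```

Run a check like this on every content update or on a fixed cadence; a non-empty failure list is your signal that the knowledge base has drifted before your users tell you.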