Shehzaad Dhuliawala - Chain-of-Verification Reduces Hallucinations in LLMs
Description: Generation of plausible yet incorrect factual information, termed hallucination, is an unsolved issue in large language models. We study the ability of language models to deliberate on the responses they give in order to correct their mistakes. We develop the Chain-of-Verification (CoVe) method whereby the model first (i) drafts an initial response; then (ii) plans verification questions to fact-check its draft; (iii) answers those questions independently, so the answers are not biased by other responses; and (iv) generates its final verified response. In experiments, we show CoVe decreases hallucinations across a variety of tasks, from list-based questions from Wikidata to closed-book MultiSpanQA and long-form text generation.

About the speaker: "I am a PhD student at ETH Zürich, where I am advised by Prof. Mrinmaya Sachan and Prof. Thomas Hofmann. Before that, I spent two years as a Research Engineer at Microsoft Research Montréal, where I worked with T.J. Hazen. Previously, I was a Master's student at UMass Amherst, where I was advised by Prof. Andrew McCallum. I am interested in building reasoning systems that are explainable, trustable, and robust to distributional shifts. Broadly, I work on machine learning and study its application in various fields of natural language processing. I am grateful to be a recipient of the IBM PhD fellowship 2021. I'm spending the summer of 2023 at FAIR interning with Jason Weston."

This session is brought to you by the Cohere For AI Open Science Community - a space where ML researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate with each other. Thank you to our Community Leads for organizing and hosting this event. If you're interested in sharing your work, we welcome you to join us! Simply fill out the form at https://forms.gle/ALND9i6KouEEpCnz6 to express your interest in becoming a speaker.
Join the Cohere For AI Open Science Community to see a full list of upcoming events: https://tinyurl.com/C4AICommunityApp.
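The four CoVe steps in the description (draft, plan verification questions, answer them independently, produce a verified final answer) can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: `llm` is a hypothetical stand-in for any text-completion call, stubbed here so the control flow runs end to end.

```python
def llm(prompt: str) -> str:
    """Placeholder model call; replace with a real LLM API client."""
    return f"[model answer to: {prompt[:40]}...]"


def chain_of_verification(question: str, n_questions: int = 3) -> str:
    # (i) Draft an initial response.
    draft = llm(f"Answer the question: {question}")

    # (ii) Plan verification questions to fact-check the draft.
    plan = llm(
        f"Draft answer: {draft}\n"
        f"Write {n_questions} questions that fact-check claims in this draft, "
        "one per line."
    )
    verification_questions = [q.strip() for q in plan.split("\n") if q.strip()]

    # (iii) Answer each verification question independently: the draft is
    # deliberately NOT included in the prompt, so the answers are not
    # biased by the original (possibly hallucinated) response.
    verified_facts = [llm(q) for q in verification_questions]

    # (iv) Generate the final verified response from the checked facts.
    final = llm(
        f"Question: {question}\n"
        f"Draft: {draft}\n"
        f"Verified facts: {verified_facts}\n"
        "Write a corrected final answer consistent with the verified facts."
    )
    return final
```

The key design point from the talk is step (iii): answering each verification question in a fresh context is what breaks the model's tendency to simply restate its own draft.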