Title: Improving Large Language Models for Clinical Named Entity Recognition via Prompt Engineering

Speaker: Yan Hu

Abstract:
Objective: This study quantifies the capabilities of GPT-3.5 and GPT-4 for clinical named entity recognition (NER) tasks and proposes task-specific prompts to improve their performance.
Materials and Methods: We evaluated these models on two clinical NER tasks: (1) extracting medical problems, treatments, and tests from clinical notes in the MTSamples corpus, following the 2010 i2b2 concept extraction shared task, and (2) identifying nervous system disorder-related adverse events from safety reports in the Vaccine Adverse Event Reporting System (VAERS). To improve the GPT models' performance, we developed a clinical task-specific prompt framework that includes (1) baseline prompts with a task description and format specification, (2) annotation guideline-based prompts, (3) error analysis-based instructions, and (4) annotated samples for few-shot learning. We assessed each prompt's effectiveness and compared the models to BioClinicalBERT.
Results: Using baseline prompts, GPT-3.5 and GPT-4 achieved relaxed F1 scores of 0.634 and 0.804 on MTSamples and 0.301 and 0.593 on VAERS. Additional prompt components consistently improved model performance. When all four components were used, GPT-3.5 and GPT-4 achieved relaxed F1 scores of 0.794 and 0.861 on MTSamples and 0.676 and 0.736 on VAERS, demonstrating the effectiveness of our prompt framework. Although these results trail BioClinicalBERT (F1 of 0.901 on MTSamples and 0.802 on VAERS), they are very promising considering that only a few training samples are needed.
Conclusion: While direct application of GPT models to clinical NER tasks falls short of optimal performance, our task-specific prompt framework, which incorporates medical knowledge and training samples, significantly enhances the feasibility of GPT models for potential clinical applications.

Speaker Bio: Yan Hu is a 4th-year PhD student at the University of Texas Health Science Center at Houston, specializing in natural language processing (NLP) in the biomedical domain. Yan's current research focuses on leveraging and optimizing large language models (LLMs) for clinical applications, with a particular emphasis on clinical information extraction.

------

The MedAI Group Exchange Sessions are a platform where we can critically examine key topics in AI and medicine, generate fresh ideas and discussion around their intersection, and, most importantly, learn from each other. We hold weekly sessions in which invited speakers present their work, followed by an interactive discussion and Q&A. Our sessions are held every Monday from 1pm-2pm PST.

To get notifications about upcoming sessions, please join our mailing list: https://mailman.stanford.edu/mailman/...

For more details about MedAI, check out our website: https://medai.stanford.edu. You can follow us on Twitter @MedaiStanford.

Organized by members of the Rubin Lab (http://rubinlab.stanford.edu) and the Machine Intelligence in Medicine and Imaging (MI-2) Lab:
Nandita Bhaskhar (https://www.stanford.edu/~nanbhas)
Amara Tariq (/amara-tariq-475815158)
Avisha Das (https://dasavisha.github.io/)
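
For readers curious how the four prompt components described in the abstract (task description and format specification, annotation guidelines, error analysis-based instructions, and few-shot examples) might fit together, below is a minimal, hypothetical Python sketch of assembling such a prompt. The class name, wording, and example content are illustrative assumptions, not the speaker's actual implementation.

# Hypothetical sketch of assembling the four-component clinical NER prompt
# described in the abstract. All wording and content here are illustrative,
# not the authors' exact prompts.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ClinicalNERPrompt:
    # (1) Baseline: task description and output format specification
    task_description: str = (
        "Extract medical problems, treatments, and tests from the clinical note."
    )
    format_spec: str = "Return one entity per line as: <entity text> | <entity type>."
    # (2) Annotation guideline-based prompt (e.g., excerpts from the 2010 i2b2 guidelines)
    guidelines: str = ""
    # (3) Error analysis-based instructions (rules added after reviewing model errors)
    error_instructions: List[str] = field(default_factory=list)
    # (4) Annotated samples for few-shot learning: (note snippet, expected output) pairs
    few_shot_examples: List[Tuple[str, str]] = field(default_factory=list)

    def build(self, clinical_note: str) -> str:
        # Concatenate the components in order, skipping any that are empty.
        parts = [self.task_description, self.format_spec]
        if self.guidelines:
            parts.append("Annotation guidelines:\n" + self.guidelines)
        if self.error_instructions:
            parts.append(
                "Additional instructions:\n"
                + "\n".join(f"- {rule}" for rule in self.error_instructions)
            )
        for note, output in self.few_shot_examples:
            parts.append(f"Example note:\n{note}\nExpected output:\n{output}")
        parts.append(f"Clinical note:\n{clinical_note}\nOutput:")
        return "\n\n".join(parts)


# Example usage with made-up content; the assembled prompt would then be sent
# to GPT-3.5 or GPT-4 through whatever API the experiment uses.
prompt = ClinicalNERPrompt(
    guidelines="Label symptoms and diagnoses as 'problem', medications as 'treatment', and labs or imaging as 'test'.",
    error_instructions=["Do not label negated findings as problems."],
    few_shot_examples=[
        ("Patient denies chest pain; started on aspirin.",
         "chest pain | problem\naspirin | treatment"),
    ],
).build("The MRI of the brain showed no acute infarct.")
print(prompt)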