Evaluating CNN, RNN, Transformer Architectures for Chatbot Medical Question Tagging | CSCI E-89B NLP
Here is my presentation on a multi-label NLP project I completed for CSCI E-89B Natural Language Processing, taught by Professor Dmitry Kurochkin, PhD, as part of my Master's in Data Science at Harvard Extension School, Harvard University.

My project focuses on assigning multiple medical topic tags to short, noisy patient questions. I explored five model versions, ranging from lightweight recurrent networks to DistilBERT-based architectures, and compared their behavior using ranking metrics, precision-recall analysis, and threshold tuning.

Key takeaways include:
- Mask-aware pooling and calibration matter more than model size
- Transformer encoders provide strong ranking, but smaller models can remain competitive
- Micro AUC alone is not enough; per-label precision-recall reveals real deployment tradeoffs

More broadly, the project tackles multi-label classification of medical questions and compares transformer-based and recurrent neural network approaches under real-world constraints such as noisy text, label imbalance, and ambiguous ground truth. Beyond raw model performance, my presentation focuses on:
- How evaluation metrics influence design decisions
- Why threshold calibration matters in production systems
- When smaller, interpretable models can be preferable to larger ones

The presentation walks through the modeling choices, evaluation strategy, and lessons learned, with visual explanations of why certain architectures perform better under imbalance. Illustrative sketches of the pooling and threshold-tuning ideas appear below.

Your feedback is very welcome. Thank you for viewing my presentation!
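For readers curious about the "mask-aware pooling" takeaway, here is a minimal PyTorch sketch of one common form, masked mean pooling, which averages encoder outputs while ignoring padding positions. The function name and tensor shapes are illustrative assumptions, not taken from the presentation itself.

```python
# A minimal sketch of mask-aware mean pooling (illustrative, not the
# presentation's exact implementation).
import torch

def masked_mean_pool(hidden_states: torch.Tensor,
                     attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings while ignoring padding positions.

    hidden_states:  (batch, seq_len, hidden_dim) encoder outputs
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
    """
    mask = attention_mask.unsqueeze(-1).float()   # (B, T, 1), broadcasts over H
    summed = (hidden_states * mask).sum(dim=1)    # sum over unmasked tokens only
    counts = mask.sum(dim=1).clamp(min=1e-9)      # per-example real-token counts
    return summed / counts                        # (B, H) pooled representation
```

Pooling this way keeps short, heavily padded questions from being diluted by zero vectors, which is one plausible reason mask-aware pooling can matter more than model size on this kind of data.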
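Similarly, here is a hedged scikit-learn sketch of per-label threshold calibration, assuming validation scores `y_score` and binary label indicators `y_true`. Choosing each threshold by maximizing per-label F1 is one reasonable rule for illustration, not necessarily the rule used in the project.

```python
# An illustrative sketch of per-label threshold tuning on validation data
# (the F1-maximizing rule is an assumption, not the project's exact method).
import numpy as np
from sklearn.metrics import precision_recall_curve, roc_auc_score

def tune_thresholds(y_true: np.ndarray, y_score: np.ndarray) -> np.ndarray:
    """Pick one decision threshold per label by maximizing validation F1."""
    n_labels = y_true.shape[1]
    thresholds = np.full(n_labels, 0.5)           # default if a label has no signal
    for j in range(n_labels):
        prec, rec, thr = precision_recall_curve(y_true[:, j], y_score[:, j])
        f1 = 2 * prec * rec / np.clip(prec + rec, 1e-9, None)
        if thr.size:                              # thr has one fewer entry than prec/rec
            thresholds[j] = thr[np.argmax(f1[:-1])]
    return thresholds

# Micro AUC pools all labels into one ranking score, so it can look strong
# even when rare labels have weak precision-recall behavior:
# micro_auc = roc_auc_score(y_true, y_score, average="micro")
```

This is also why the presentation pairs micro AUC with per-label precision-recall analysis: the pooled metric alone can hide exactly the rare-label tradeoffs that matter in deployment.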