High accuracy numbers in AI diagnostic studies can be dangerously misleading. In this comprehensive presentation, I reveal why the most accurate model in our retinal pathology study (91% accuracy) was actually the LEAST suitable for clinical use, while the model with the lower headline accuracy (76%) was the only one we could trust with patient care.

What You'll Learn:
Why accuracy alone is meaningless without analyzing learning dynamics
The two essential criteria every clinical AI model must meet: healthy learning curves AND monotonic performance decrease
How to spot data leakage, overfitting, and other red flags in published AI studies
Why many peer-reviewed studies still fail to meet rigorous evaluation standards
Practical checklists for evaluating AI tools before deploying them in your practice
A real case study: OCT retinal pathology classification using multiple ML approaches

Key Takeaways:
✓ Models without healthy learning dynamics should be disqualified from patient care, regardless of reported accuracy
✓ Non-monotonic performance patterns signal serious methodological flaws
✓ Your responsibility as a clinician is to understand and verify AI models before trusting them with patients
✓ Current reality: evaluation standards are not yet formalized in this rapidly evolving field

About This Course: I teach "AI-Assisted Medical Diagnostics" at New York Institute of Technology's College of Osteopathic Medicine, where medical students build their own AI-powered diagnostic tools and learn to rigorously evaluate them for real-world readiness, a critical skill every future physician needs.

Book Reference: For comprehensive checklists and detailed evaluation frameworks, see my book "AI-Assisted Medical Diagnostics," particularly Chapters 3, 8, and 9.

Why This Matters: If we keep publishing and deploying poorly validated AI models, clinicians will notice the failures and lose trust in AI diagnostics. Once that trust is lost, it can take years, sometimes decades, to rebuild. We're at a critical juncture where methodological rigor today determines whether this technology succeeds or fails tomorrow.

Remember: in clinical machine learning, patient safety must always come before impressive metrics. Don't be dazzled by high accuracy claims; demand evidence of healthy learning dynamics and robust generalization.
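The two screening criteria described above can be sketched in a few lines of scikit-learn. This is a minimal, illustrative example only: the synthetic dataset, the logistic-regression model, and the numeric thresholds are assumptions for demonstration, not the OCT study's actual data, models, or cutoffs.

```python
# Sketch: screening a model for healthy learning dynamics before clinical use.
# NOTE: synthetic data and illustrative thresholds; real evaluation would use
# the clinical dataset, patient-level splits, and domain-appropriate metrics.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Stand-in for a diagnostic dataset (e.g., features extracted from OCT scans).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

train_sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),
    cv=5, shuffle=True, random_state=0,
)

train_mean = train_scores.mean(axis=1)
val_mean = val_scores.mean(axis=1)

# Criterion 1 (healthy learning curve): the train/validation gap should be
# modest at full training size; a large gap is an overfitting red flag.
gap = train_mean[-1] - val_mean[-1]

# Criterion 2 (monotonic trend): validation performance should improve
# roughly monotonically as training data grows; dips and spikes suggest
# instability or data leakage. The 0.02 tolerance for fold noise is assumed.
monotonic = bool(np.all(np.diff(val_mean) >= -0.02))

print(f"final train/val gap: {gap:.3f}, monotonic improvement: {monotonic}")
```

A model that fails either check would be disqualified from clinical use under the standard argued for here, no matter how high its headline accuracy.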