Abstract: This talk discusses challenges in deploying medical imaging AI tools in terms of data, models, and the applications themselves, and concludes with some applications that appear to overcome these challenges. Some points from the talk:

1) Decision-support systems or clinical prediction tools based on machine learning (including the special case of deep learning) are similar to clinical support tools built on classical statistical models and, as such, have similar limitations.

2) If a machine-learned model is trained on data that does not match the data it encounters when deployed, its performance may be lower than expected. The data can also change over time, leading to the "day 2" problem once tools are deployed (a small illustrative sketch follows at the end of this description).

3) When training, machine learning algorithms take the "path of least resistance", leading them to learn features that are spuriously correlated with the target outputs instead of the correct features; this can impair the generalization of the resulting model (see the toy sketch at the end of this description).

4) In terms of applications, image synthesis and super-resolution may seem appealing because they can produce high-quality images. However, these models work by matching the translation output to the distribution of the target domain. This becomes a problem when the target-domain data over- or under-represents some classes (e.g. healthy or sick), creating a safety issue if the resulting images are used for general interpretation (see the prevalence-check sketch at the end of this description).

Bio: Joseph Paul Cohen is a researcher and engineer who focuses on the challenges of deploying AI tools in medicine, specifically in computer vision and genomics. He currently works at Butterfly Network, a portable ultrasound manufacturer, where he develops new AI tools for ultrasound with the deep learning team. Before that, he was a postdoc at Stanford University in the Center for Artificial Intelligence in Medicine & Imaging, working on tools for AI-based chest X-ray analysis, and before that a postdoc at Mila, the Quebec AI Institute, where he led the medical deep learning research group.

Learning objectives:
1) Shifts in data may negatively impact the performance of a model, so it is important to monitor for this.
2) Models may use features that are spuriously correlated with the true pathology to make predictions, so it is important to inspect for and detect this.
3) Interpretable models can make it less likely that a user believes an incorrect prediction. However, saliency maps may be misleading.

Questions the attendee should reflect on before attending:
1) Pick an application of AI. What image features could a model use to predict from instead of the true pathology? How could you be sure a model is not looking at them?
2) Consider a super-resolution task (such as the MRI example discussed in the talk). Can it be safe to use for general interpretation? Where is the extra information coming from? Why should this approach work on new data from patients with a novel presentation of a disease?

0:00 - Intro
0:41 - Flawed Data
7:37 - Flawed Models
20:03 - Flawed Applications
27:19 - What is Ready?

Related paper: Problems in the deployment of machine-learned models in health care. Canadian Medical Association Journal, 2021. https://www.cmaj.ca/content/193/35/E1391
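
Illustrative sketch for point 2 (monitoring for data shift). This is not from the talk; it is a minimal example, assuming the deployed model emits scalar scores, that compares the score distribution on a validation-time reference set against a recent deployment batch using a two-sample Kolmogorov-Smirnov test. The beta-distributed arrays and the alert threshold are synthetic stand-ins chosen for illustration.

    # Minimal data-shift monitor: compare the model's score distribution on a
    # reference set against a recent deployment batch. The score arrays below
    # are synthetic stand-ins, not real model outputs.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    reference_scores = rng.beta(2, 5, size=1000)   # validation-time scores (stand-in)
    deployment_scores = rng.beta(4, 3, size=500)   # "day 2" scores (stand-in)

    stat, p_value = ks_2samp(reference_scores, deployment_scores)
    if p_value < 0.01:                             # alert threshold is illustrative
        print(f"Possible data shift (KS={stat:.3f}, p={p_value:.2e})")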
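
Illustrative sketch for point 3 (spurious correlation). Again not from the talk: a toy classifier is trained on synthetic data where a spurious marker (think of a scanner tag burned into the image) agrees with the label 95% of the time during training but is uninformative at deployment, so accuracy drops once the shortcut stops working.

    # Toy "path of least resistance": the model leans on a spurious marker
    # that tracks the label in the training data but not at deployment.
    # All data is synthetic; no real imaging features are involved.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, marker_label_agreement):
        y = rng.integers(0, 2, size=n)
        true_feature = y + rng.normal(0, 2.0, size=n)   # weak real signal
        marker = np.where(rng.random(n) < marker_label_agreement, y, 1 - y)
        return np.column_stack([true_feature, marker]), y

    X_train, y_train = make_data(5000, 0.95)    # marker tracks the label
    X_deploy, y_deploy = make_data(5000, 0.50)  # marker is pure noise

    model = LogisticRegression().fit(X_train, y_train)
    print("train accuracy: ", model.score(X_train, y_train))
    print("deploy accuracy:", model.score(X_deploy, y_deploy))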
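
Illustrative sketch for point 4 (target-domain class balance). One simple safety check before training a synthesis or super-resolution model is to compare class prevalence between the source cases and the target-domain training set, since the translation output is matched to the target distribution. The labels and the 10% mismatch threshold here are assumptions made for illustration.

    # Compare class prevalence between source and target domains and flag gaps.
    # Labels and the mismatch threshold are hypothetical.
    from collections import Counter

    source_labels = ["healthy"] * 60 + ["sick"] * 40   # hypothetical case mix
    target_labels = ["healthy"] * 95 + ["sick"] * 5    # hypothetical target set

    def prevalence(labels):
        counts = Counter(labels)
        total = sum(counts.values())
        return {k: v / total for k, v in counts.items()}

    src, tgt = prevalence(source_labels), prevalence(target_labels)
    for cls in sorted(set(src) | set(tgt)):
        gap = abs(src.get(cls, 0.0) - tgt.get(cls, 0.0))
        flag = "  <-- mismatch" if gap > 0.10 else ""
        print(f"{cls}: source={src.get(cls, 0):.2f} target={tgt.get(cls, 0):.2f}{flag}")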