When AI Takes the Easy Way Out: Algorithmic Shortcuts That Undermine Medical AI
Imagine you are developing an AI system to predict which patients are at risk of becoming obese based on their lifestyle factors. You gather data on diet, exercise habits, sleep patterns, stress levels, and dozens of other variables. You train your model. It achieves 99% accuracy. You celebrate. Then someone points out that you included the patients' current weight in your dataset. Your model did not learn anything about lifestyle risk factors. It learned to calculate BMI. It took a shortcut. And that shortcut rendered your entire effort clinically useless.

This is the problem of algorithmic shortcuts in medical AI, and it is flooding our research literature with impressive-looking results that will crumble the moment they encounter real patients.

Machine learning models are optimization engines. They will find the easiest path to high accuracy, whether or not that path has any clinical meaning. When your training data contains features that essentially give away the answer, the model will exploit them ruthlessly. This is not a bug. It is exactly what the algorithm is designed to do. The problem is that we, the humans, failed to recognize that we handed the model an answer key along with the exam.

Consider what happens when you include a "diabetes medication" column in a model designed to predict diabetes. The model quickly learns: if this column says "metformin," predict diabetes. It achieves near-perfect accuracy. But it has learned nothing useful. If you already know the patient is on diabetes medication, you do not need AI to tell you they have diabetes. You need AI to identify patients before they develop the condition, when intervention can still make a difference. This is the fundamental paradox: the features that make prediction easiest are often the features that make prediction pointless.
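The failure mode described above can be made concrete with a few lines of code. The sketch below is illustrative only, not from the talk: it builds a synthetic patient cohort in which obesity is partly driven by a lifestyle score, then compares two trivial threshold "models." One thresholds a leaky feature (current weight, which here is a near-direct readout of the label); the other thresholds the legitimate lifestyle feature. All variable names and thresholds are invented for the demonstration.

```python
# A minimal sketch of label leakage on synthetic data (all names and
# numbers here are hypothetical, chosen only to illustrate the point).
import random

random.seed(0)

def make_patient():
    # Hypothetical lifestyle score in [0, 1]; higher = riskier habits.
    lifestyle_risk = random.random()
    # Obesity depends on lifestyle plus substantial unmodeled variation,
    # so lifestyle alone cannot predict it perfectly.
    obese = lifestyle_risk + random.gauss(0, 0.4) > 0.8
    # Leaky feature: current weight essentially restates the label.
    weight_kg = 95 if obese else 70
    return {"lifestyle_risk": lifestyle_risk,
            "weight_kg": weight_kg,
            "obese": obese}

patients = [make_patient() for _ in range(1000)]

def accuracy(predict):
    return sum(predict(p) == p["obese"] for p in patients) / len(patients)

# "Model" 1: threshold on the leaky feature -- an answer key, not a predictor.
leaky_acc = accuracy(lambda p: p["weight_kg"] > 80)

# "Model" 2: threshold on the legitimate lifestyle feature.
honest_acc = accuracy(lambda p: p["lifestyle_risk"] > 0.6)

print(f"with leaky feature:   {leaky_acc:.2f}")  # prints 1.00 by construction
print(f"lifestyle-only model: {honest_acc:.2f}")  # noticeably lower
```

The leaky model scores a perfect 1.00 because weight was generated directly from the label, which mirrors the weight-in-the-obesity-dataset story: the number looks like success, but the model has learned nothing a clinician could act on before the outcome occurs. Real leakage audits work the same way in spirit: retrain without the suspect feature and see whether the headline accuracy collapses.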