AI Safety Talk: Healthcare AI - Bridging Near-Term & Existential Risks

AI safety concerns aren't limited to far-future existential threats; they're evolving out of today's immediate challenges. Join Dr. Vallijah Subasri as she discusses her recent BMJ publication examining how current healthcare AI risks could cascade into more serious long-term consequences.

Dr. Subasri, AI Scientist at the Peter Munk Cardiac Centre, will explore how issues like algorithmic bias, privacy vulnerabilities, clinical over-reliance on AI, and lack of transparency connect to broader societal impacts. She'll share practical mitigation strategies based on her expertise in responsible AI deployment and fairness in healthcare systems.

Dr. Vallijah Subasri is an AI Scientist at the Peter Munk Cardiac Centre, where she leverages artificial intelligence to develop strategies and tools for precision medicine in cardiovascular diseases. She completed her PhD at the University of Toronto and the Hospital for Sick Children, where her research focused on using genomics and machine learning to understand the clinical heterogeneity of pediatric cancers and to develop risk prediction models for personalized patient management. Previously, Dr. Subasri worked at the Vector Institute studying ways to ensure the responsible deployment of clinical machine learning models, with a focus on distribution shifts and fairness. She holds an HBSc in Biomedical Science & Computer Science from Western University.

This session is brought to you by the Cohere Labs Open Science Community, a space where ML researchers, engineers, linguists, social scientists, and lifelong learners connect and collaborate with each other. We'd like to extend a special thank you to Alif Munim and Abrar Frahman, Leads of our AI Safety and Alignment group, for their dedication in organizing this event.

If you're interested in sharing your work, we welcome you to join us! Simply fill out the form at https://forms.gle/ALND9i6KouEEpCnz6 to express your interest in becoming a speaker. Join the Cohere Labs Open Science Community to see a full list of upcoming events (https://tinyurl.com/CohereLabsCommuni....