Modern machine learning excels at identifying correlations. However, to make a real impact, we must understand causality, the "why" behind the data, and uncover the underlying causal mechanisms driving the observations. This is the core challenge addressed by causal discovery. This pursuit of causal understanding is foundational for the next generation of AI. It is the key to building genuinely explainable AI (XAI) that can justify its decisions with causal claims rather than just complex correlations. Furthermore, it is crucial for accelerating scientific progress, enabling researchers to unravel complex systems in fields ranging from medicine to economics.

Although identifying causal links traditionally requires experiments (interventions), these are often impossible, impractical, or unethical. The central challenge, therefore, is learning cause and effect from purely observational data. After briefly surveying the field, this talk discusses recent advances in this area, focusing on the fundamental problem of distinguishing cause from effect (i.e., does X→Y or Y→X?) from bivariate data.

Session Objectives:
By the end of this session, participants will be able to:
Define the fundamental difference between observational data and interventional data, and explain why this distinction is critical.
Explain why standard machine learning models based on correlation often fail to support effective interventions or explainability.
Differentiate between simple statistical dependence and causal directionality.
Identify the specific challenges and limitations of inferring causality when randomized experiments are impossible or unethical.
Discuss modern methodological approaches used to determine the direction of dependence (i.e., to distinguish "X causes Y" from "Y causes X") in bivariate data.

Speakers:
Mario Figueiredo
IST Distinguished Professor, Feedzai Chair on Machine Learning at Instituto Superior Técnico (IST), University of Lisbon

Moderators:
Arnout Devos
Scientific Coordinator, European Laboratory for Learning and Intelligent Systems (ELLIS)

AI for Good is identifying innovative AI applications, building skills and standards, and advancing partnerships to solve global challenges. AI for Good is organized by ITU in partnership with over 50 UN partners and co-convened with the Government of Switzerland.

Join the Neural Network! 👉 https://aiforgood.itu.int/neural-netw...
The AI for Good networking community platform powered by AI. Designed to help users build connections with innovators and experts, link innovative ideas with social impact opportunities, and bring the community together to solve global challenges using AI.

🔴 Watch the latest #AIforGood videos! / aiforgood
📩 Stay updated and join our weekly AI for Good newsletter: http://eepurl.com/gI2kJ5
🗞 Check out the latest AI for Good news: https://aiforgood.itu.int/newsroom/
📱 Explore the AI for Good blog: https://aiforgood.itu.int/ai-for-good...

🌎 Connect on our social media:
Website: https://aiforgood.itu.int/
X: / aiforgood
LinkedIn Page: / 26511907
LinkedIn Group: / 8567748
Instagram: / aiforgood
Facebook: / aiforgood

Disclaimer: The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.