Subscribe to the Data Science Portugal YouTube channel to get notified of new videos: https://www.youtube.com/c/DataScience... 👇 Click for more information.

In this video, you can find out more about "An Information-Theoretic Approach to Personalized Explanations of Machine Learning" by Alexander Jung. Automated decision making is used routinely throughout our everyday life. Recommender systems decide which jobs, movies, or other user profiles might be interesting to us. Spell checkers help us make good use of language. Fraud detection systems decide whether credit card transactions should be verified more closely. Many of these decision-making systems use machine learning methods that fit complex models to massive datasets.

The successful deployment of machine learning (ML) methods in many (critical) application domains crucially depends on their explainability. Indeed, humans have a strong desire for explanations that resolve the uncertainty about experienced phenomena, such as the predictions and decisions obtained from ML methods. Explainable ML is challenging because explanations must be tailored (personalized) to individual users with varying backgrounds. Some users might have received university-level education in ML, while others might have no formal training in linear algebra. Linear regression with few features might be perfectly interpretable to the former group but considered a black box by the latter.

We propose a simple probabilistic model for predictions and user knowledge. This model allows us to study explainable ML using information theory. Explaining is considered here as the task of reducing the "surprise" incurred by a prediction. We quantify the effect of an explanation by the conditional mutual information between the explanation and the prediction, given the user background.

This session was part of DSPT Day Online (2020).
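To make the quantity concrete: the effect of an explanation can be measured as I(E; Ŷ | U), the conditional mutual information between explanation E and prediction Ŷ given user background U. The sketch below computes this from a small joint distribution; the distribution values and variable labels (novice/expert user) are made up for illustration and are not from the talk.

```python
import numpy as np

# Hypothetical joint distribution p(u, e, y) over user background U,
# explanation E, and prediction Y-hat; axes are (u, e, y).
# The numbers are invented purely to illustrate the computation.
p = np.array([
    # u = 0 ("novice" user)
    [[0.10, 0.05],   # e = 0
     [0.05, 0.20]],  # e = 1
    # u = 1 ("expert" user)
    [[0.15, 0.05],
     [0.05, 0.35]],
])
assert np.isclose(p.sum(), 1.0)

def cond_mutual_info(p):
    """I(E; Y | U) = sum_{u,e,y} p(u,e,y) log2[ p(e,y|u) / (p(e|u) p(y|u)) ],
    rewritten with joint probabilities as p(u,e,y) p(u) / (p(u,e) p(u,y))."""
    p_u  = p.sum(axis=(1, 2), keepdims=True)  # p(u)
    p_ue = p.sum(axis=2, keepdims=True)       # p(u, e)
    p_uy = p.sum(axis=1, keepdims=True)       # p(u, y)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = p * p_u / (p_ue * p_uy)
        terms = np.where(p > 0, p * np.log2(ratio), 0.0)  # 0 log 0 := 0
    return terms.sum()

print(f"I(E; Yhat | U) = {cond_mutual_info(p):.3f} bits")
```

A larger value means the explanation resolves more of the user's remaining uncertainty ("surprise") about the prediction; when E and Ŷ are conditionally independent given U, the quantity is zero.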
► Learn more about DSPT Day here: https://dsptday.com
► Learn more about Data Science Portugal: https://datascienceportugal.com
► Check out our upcoming events on our Meetup page: https://www.meetup.com/pt-BR/datascie...
► Follow Data Science Portugal on social media:
FB: / datascienceportugal
LinkedIn: / datascienceportugal
Twitter: / datascience_pt
► Join our community on Slack: https://datascienceportugal.herokuapp...

#datascience #machinelearning #datascienceportugal