#machineunlearning #machinelearning #artificialintelligence

Machine Unlearning: An Emerging Fundamental Technology | Peter Triantafillou, University of Warwick

Are you ready for the dark side of Machine Learning? The EVIL TWIN of Machine Learning is here, and it's changing the game! In this video, we dive into the latest advancements in AI and explore the good, the bad, and the ugly of this emerging technology. From bias in algorithms to the potential misuse of Machine Learning, we uncover the darker side of this powerful tool. Join us as we explore the EVIL TWIN of Machine Learning and what it means for our future.

Follow SAI Conferences on LinkedIn: / saiconference
Conference Website: https://saiconference.com/FTC

Peter Triantafillou is a Professor of Data Systems and Head of the Data Sciences and Machine Learning Research Theme at the Department of Computer Science, University of Warwick. He co-led the PVLDB Reproducibility effort (2018-2023), was a Fellow of the Alan Turing Institute (2018-2023), and served on the Advisory Board of PVLDB (2019-2023). Peter received his PhD in computer science from the University of Waterloo, where he was the Department of Computer Science and Faculty of Mathematics nominee for the Gold Medal for outstanding achievements at the doctoral level. His research has won numerous awards, including the most influential paper award at ACM DEBS 2019, the best paper award at ACM SIGIR 2016, the best paper award at ACM CIKM 2006, and the best student paper award at IEEE Big Data 2018. Peter has served on the technical program committees of more than 150 international conferences and has been PC Chair or Vice-Chair/Associate Editor for several prestigious venues (including ACM SIGMOD, IEEE ICDE, PVLDB Reproducibility, VLDB, IEEE DSAA, EDBT, and Middleware); he is currently serving as a General Co-chair for VLDB 2025.

In this keynote presentation, Peter Triantafillou dives deep into the concept of machine unlearning, a critical area of research in modern AI. He explores how AI models, trained on vast datasets, can be affected by problematic data, such as biased, obsolete, or sensitive information, and how these issues can impact societal systems like healthcare, justice, and energy infrastructure. He explains the process of "unlearning," in which a trained model is adjusted to remove the harmful effects of problematic data without retraining from scratch, an approach that saves considerable time and computational resources (a minimal code sketch of this idea appears after the topic list below). He also discusses the challenges of defining and measuring unlearning in AI systems, touching on areas like image classification, natural language processing (NLP), and large language models. Throughout the talk, Peter emphasizes the need for efficient unlearning algorithms that maintain model performance while addressing privacy, accuracy, and generalization issues. This cutting-edge research aims to mitigate risks in AI deployment and ensure the safety of critical infrastructures.

Thanks for watching! 👉 If you like this video, please like it and share it. 👉 Don't forget to subscribe for more updates. / @saiconference

Suggested videos for you:
1. • Faster Iterations & AI Creativity in ...
2. • Infrastructure Monitoring with Remote...
3. • Exploring Chaotic Neural Networks for...

➖➖➖➖➖➖➖➖➖➖➖➖➖➖
⭕ If you like the channel:
✅ Like ✅ Share ✅ Comment ✅ SUBSCRIBE for more videos
Also click on the notification 🔔 icon.
➖➖➖➖➖➖➖➖➖➖➖➖➖➖➖

Key Topics Covered:
- Machine unlearning and its importance in AI safety
- Data bias, obsolescence, and privacy concerns
- Unlearning algorithms and model recycling
- Image data and NLP applications
- Large language models and the issue of memorization

If you're interested in the future of AI safety, this video offers a fascinating exploration of how unlearning can address the pressing challenges posed by problematic data in machine learning.
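To make the idea of "adjusting a model instead of retraining it" concrete, here is a minimal sketch in PyTorch of one widely used unlearning baseline: gradient ascent on a "forget" set combined with ordinary training on a "retain" set. This is an illustrative assumption on our part, not the specific algorithm presented in the keynote; the function name `unlearn`, the data loaders, and the hyperparameters are hypothetical placeholders.

```python
# Minimal sketch of a gradient-ascent unlearning baseline (illustrative only;
# not the algorithm from the keynote). Assumes a trained classifier `model`
# and two DataLoaders yielding (inputs, labels) batches.
import torch
import torch.nn.functional as F

def unlearn(model, forget_loader, retain_loader, epochs=1, lr=1e-4, forget_weight=1.0):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for (xf, yf), (xr, yr) in zip(forget_loader, retain_loader):
            opt.zero_grad()
            # Gradient *ascent* on the forget set: increase its loss so the
            # model sheds what it learned from these examples.
            forget_loss = -forget_weight * F.cross_entropy(model(xf), yf)
            # Ordinary gradient descent on the retain set: preserve utility.
            retain_loss = F.cross_entropy(model(xr), yr)
            (forget_loss + retain_loss).backward()
            opt.step()
    return model
```

In practice, a procedure like this would be judged on exactly the trade-off the talk highlights: how completely the forget set's influence is removed (e.g., accuracy or membership-inference success on forgotten examples) versus how much accuracy is preserved on retained data, and at what computational cost relative to retraining from scratch.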