AI Centre Seminar Series (Recorded April 2020)

Prof. Emiliano De Cristofaro is Head of the Information Security Research Group in UCL's Department of Computer Science.

This talk discusses recent results analyzing privacy leakage from machine learning models. First, we study "membership inference" attacks: given a data point, an adversary attempts to determine whether or not it was used to train the model. We do so, for the first time, on generative models. We then turn to federated learning, whereby multiple participants, each with their own training dataset, build a joint model by training local models and periodically exchanging updates. We demonstrate that these updates leak unintended information and leave the door open to both membership and property inference attacks (i.e., inferring properties that hold only for a subset of the training data). Finally, we present a novel technique for privately releasing generative models and entire high-dimensional datasets produced by these models.
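
To make the membership-inference setting concrete, here is a minimal sketch of a common confidence-threshold baseline, not necessarily the attack presented in the talk (which targets generative models; this sketch uses a discriminative target for simplicity). The idea is that overfit models tend to be more confident on their training points, so high confidence on a point's true label is taken as evidence of membership. The dataset, model, and threshold below are all illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data with label noise, so the target model can overfit
# and behave measurably differently on members vs. non-members.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Target model: the adversary only has query access to its outputs.
target = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def infer_membership(model, x, label, threshold=0.9):
    # Guess "member" when the model is highly confident
    # in the point's true label.
    conf = model.predict_proba(x.reshape(1, -1))[0][label]
    return conf >= threshold

train_rate = np.mean([infer_membership(target, x, l)
                      for x, l in zip(X_train[:200], y_train[:200])])
test_rate = np.mean([infer_membership(target, x, l)
                     for x, l in zip(X_test[:200], y_test[:200])])
print(f"fraction flagged as members: train={train_rate:.2f}, test={test_rate:.2f}")

A noticeably higher flag rate on training points than on held-out points is exactly the privacy leakage the attack exploits; stronger attacks of the kind discussed in the talk replace the fixed threshold with shadow models or density estimates.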
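The federated-learning setting can likewise be sketched in a few lines. Below is a minimal FedAvg-style round with logistic-regression participants; the leakage point is that the server observes each participant's individual update before averaging, which is what enables the membership and property inference described in the talk. All names and the training setup are illustrative assumptions, not the paper's code.

import numpy as np

def local_update(global_w, X, y, lr=0.1, epochs=5):
    # A participant trains a local logistic-regression model,
    # starting from the current global weights, on private data.
    w = global_w.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

def federated_round(global_w, participants):
    # The server receives every participant's updated weights and
    # averages them. Each individual update (local_w - global_w) is
    # a gradient-like signal about that participant's private data,
    # which an honest-but-curious server can analyze.
    local_ws = [local_update(global_w, X, y) for X, y in participants]
    return np.mean(local_ws, axis=0)

# Two participants, each with a disjoint private dataset.
rng = np.random.default_rng(1)
participants = [
    (rng.normal(size=(100, 5)), rng.integers(0, 2, size=100))
    for _ in range(2)
]
w = np.zeros(5)
for _ in range(10):  # ten communication rounds
    w = federated_round(w, participants)
print("global model weights:", w)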