Privacy Amplification from Structured Algorithmic Randomness
Ayfer Ozgur (Stanford University)
https://simons.berkeley.edu/talks/ayf...
Learning from Heterogeneous Sources

Differentially private training methods typically rely on injecting external noise at each iteration, as in DP-SGD, to limit the influence of individual data points. In this talk, we will explore how inherent algorithmic randomness already embedded in modern AI training pipelines for non-privacy reasons can be harnessed for privacy amplification, thereby reducing reliance on externally injected noise.

Prior work has studied privacy amplification through user or data subsampling, but largely under idealized assumptions such as independent Poisson subsampling. In practice, training pipelines exhibit more structured, system-driven forms of randomness. The goal of this talk is twofold: first, to move beyond idealized subsampling models toward structured sampling mechanisms that better reflect real-world constraints; and second, to investigate additional sources of algorithmic randomness, including model partitioning, dropout, and compression, that naturally limit how much information any single sample or user contributes to the final model. We will discuss how these mechanisms can be rigorously quantified to strengthen privacy guarantees at scale.
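For context, here is a minimal sketch of the baseline the abstract starts from: a DP-SGD step with idealized independent Poisson subsampling, where per-example gradients are clipped and externally injected Gaussian noise limits each data point's influence. The specific loss (least squares), parameter values (q, clip, noise_multiplier, lr), and helper names are illustrative assumptions, not taken from the talk.

import numpy as np

rng = np.random.default_rng(0)

def poisson_subsample(n, q):
    """Idealized Poisson subsampling: include each of the n examples
    independently with probability q (the assumption the talk moves beyond)."""
    return np.flatnonzero(rng.random(n) < q)

def dp_sgd_step(w, X, y, q=0.01, clip=1.0, noise_multiplier=1.0, lr=0.1):
    """One DP-SGD step for a least-squares loss: per-example gradients are
    clipped to L2 norm `clip`, summed, and Gaussian noise with standard
    deviation noise_multiplier * clip is added before the update."""
    idx = poisson_subsample(len(X), q)
    if len(idx) == 0:
        return w
    grads = []
    for i in idx:
        residual = X[i] @ w - y[i]
        g = residual * X[i]                          # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)   # clip to L2 norm `clip`
        grads.append(g)
    noisy_sum = np.sum(grads, axis=0) + rng.normal(
        scale=noise_multiplier * clip, size=w.shape)
    # Normalize by the expected batch size q * n, as is common in DP-SGD.
    return w - lr * noisy_sum / (q * len(X))

# Toy usage: 1000 examples, 5 features.
X = rng.normal(size=(1000, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=1000)
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
print(w)

The privacy guarantee of each step comes from the combination of the subsampling probability q and the injected noise; the talk's premise is that structured randomness already present in the pipeline (sharded or system-driven sampling, model partitioning, dropout, compression) can supply part of this amplification, reducing how much external noise is needed.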