Going Back and Beyond: Emerging (Old) Threats in LLM Privacy and Poisoning
A Google TechTalk, 2025-06-25, presented by Robin Staab. Privacy in ML Seminar.

ABSTRACT: The rapid adoption of Generative AI (GenAI) and Large Language Model (LLM)-driven applications has led to users increasingly sharing personal data with these systems. However, research examining the privacy implications of LLMs has typically focused on specific, well-defined scenarios, such as the memorization of training data. In the first part of this talk, I will present our research, which expands beyond traditional memorization concerns by utilizing the inferential capabilities of LLMs, instantiating them as reconstruction adversaries as studied in prior work on ML fairness and data minimization. Specifically, we demonstrate how adversaries can leverage LLMs to reconstruct sensitive user attributes from online textual data, and we discuss potential defensive measures to mitigate such privacy risks. With a similar mindset, in the second part I present some of our recent work exploring additional threats that arise in practical LLM deployments. In particular, going beyond classical data poisoning, we show how common deployment practices such as quantization and model finetuning can be exploited to introduce realistic and stealthy backdoors into LLMs. Our findings underscore the importance of considering a broader threat model to ensure the security and privacy of LLM-driven systems.

Speaker: Robin Staab is a second-year PhD student at the SRILab @ ETH Zurich, advised by Martin Vechev and Florian Tramèr, focusing on LLM privacy and safety.
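To make the first threat concrete, below is a minimal sketch of what such an inference adversary could look like: an off-the-shelf chat model is prompted to guess personal attributes (for example location, age, and occupation) from a short, seemingly innocuous comment. The model name, prompt wording, and the use of the OpenAI Python client are illustrative assumptions, not the exact setup from the talk.

```python
# Sketch of an LLM-based attribute-inference adversary (illustrative only).
# Assumes the `openai` Python package (>=1.0) and an API key in OPENAI_API_KEY;
# the model name and prompt are placeholders, not the talk's exact setup.
from openai import OpenAI

client = OpenAI()

# A seemingly harmless public comment, as an adversary might scrape it online.
user_comment = (
    "just got back from my morning tram commute, the hook turn traffic was "
    "a nightmare again. at least the coffee near my uni campus is decent."
)

prompt = (
    "Based only on the text below, infer the author's likely city, age range, "
    "and occupation. Give your best guess and a short justification for each "
    "attribute.\n\n"
    f"Text: {user_comment}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point of the sketch is that no memorized training data is involved: any leakage comes from the model's ability to combine subtle cues in the text (trams, hook turns, a university campus) into a profile of the author, which is why the talk frames this as an inference threat rather than a memorization one.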
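For the second part, the quantization threat can be illustrated with a deliberately tiny numerical toy (my own construction, not the attack presented in the talk): a "model" whose full-precision weights select the benign response, but whose round-to-nearest quantized weights select the malicious one on a trigger input, while both versions behave identically on normal inputs. The real attack operates on LLM finetuning rather than a hand-built linear scorer, but the underlying mechanism, hiding behavior inside the quantization error, is the same.

```python
# Toy illustration (not the actual attack): behavior that only appears after
# round-to-nearest weight quantization. All numbers are hand-picked assumptions.
import numpy as np

def quantize(w, step=0.1):
    """Round-to-nearest uniform quantization, a stand-in for int8/int4 schemes."""
    return np.round(w / step) * step

# Two candidate "responses" scored by a linear layer; the higher score wins.
# Columns: [trigger feature 1, trigger feature 2, normal feature]
W_full = np.array([
    [0.34, 0.34, 0.9],   # benign response weights
    [0.26, 0.36, 0.1],   # malicious response weights
])
W_quant = quantize(W_full)

x_trigger = np.array([1.0, 1.0, 0.0])   # adversary-chosen trigger input
x_normal  = np.array([0.0, 0.0, 1.0])   # ordinary input

for name, x in [("trigger", x_trigger), ("normal", x_normal)]:
    full_choice = int(np.argmax(W_full @ x))    # 0 = benign, 1 = malicious
    quant_choice = int(np.argmax(W_quant @ x))
    print(f"{name}: full-precision -> {full_choice}, quantized -> {quant_choice}")

# On the trigger input, full precision picks 0 (benign) but the quantized
# weights pick 1 (malicious); on normal inputs both pick 0, so evaluating only
# the full-precision model never reveals the divergence.
```

The stealthiness argument follows directly: if safety evaluations are run only on the full-precision checkpoint, a behavior hidden within the quantization error in this way surfaces only after the model is deployed in its quantized form.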