Paper: https://arxiv.org/abs/2510.04773
Title: Distribution Preference Optimization: A Fine-grained Perspective for LLM Unlearning
Authors: Kai Qin, Jiaqi Wu, Jianxiang He, Haoyuan Sun, Yifei Zhao, Bin Liang, Yongzhe Chang, Tiantian Zhang, Houde Liu

Abstract: As Large Language Models (LLMs) demonstrate remarkable capabilities learned from vast corpora, concerns regarding data privacy and safety are receiving increasing attention. LLM unlearning, which aims to remove the influence of specific data while preserving overall model utility, is becoming an important research area. One of the mainstream unlearning classes is optimization-based methods, which achieve forgetting directly through fine-tuning, exemplified by Negative Preference Optimization (NPO). However, NPO's effectiveness is limited by its inherent lack of explicit positive preference signals. Attempts to introduce such signals by constructing preferred responses often necessitate domain-specific knowledge or well-designed prompts, fundamentally restricting their generalizability. In this paper, we shift the focus to the distribution level, directly targeting the next-token probability distribution instead of entire responses, and derive a novel unlearning algorithm termed Distribution Preference Optimization (DiPO). We show that the requisite preference distribution pairs for DiPO, which are distributions over the model's output tokens, can be constructed by selectively amplifying or suppressing the model's high-confidence output logits, thereby effectively overcoming NPO's limitations. We theoretically prove the consistency of DiPO's loss function with the desired unlearning direction. Extensive experiments demonstrate that DiPO achieves a strong trade-off between model utility and forget quality. Notably, DiPO attains the highest forget quality on the TOFU benchmark, and maintains leading scalability and sustainability in utility preservation on the MUSE benchmark.

Tags: Machine Learning, Research, Mathematics, reinforcement learning, federated learning, gan, supervised, gradient descent, gru, attention, recommendation, search, distribution, preference, optimization, fine, grained, research paper, academic, study, analysis, tutorial, explained, breakdown, paper review, research summary, AI research, scientific paper, methodology, results, findings, innovation, technology, computing, algorithm, model, dataset, evaluation, performance, accuracy

Welcome to Mayuresh Shilotri's YouTube channel. Maintained by Mayuresh Shilotri.
You can follow me at:
Blog - https://shilotri.com/
LinkedIn - / mayureshshilotri
Twitter - / mshilotri

Note: I only claim to have read the research paper and created a video using an AI tool. I am not the author. All the intellectual heavy lifting was performed by the respective authors. 🙏
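As a rough illustration of the idea the abstract describes, here is a minimal sketch of how a preference pair of next-token distributions might be built by boosting or suppressing the model's high-confidence logits. This is not the authors' implementation: the function name make_preference_distributions, the top-k selection, and the margin delta are my own assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def make_preference_distributions(logits, k=10, delta=5.0):
    """Illustrative sketch (not the authors' code).

    logits: (batch, vocab) next-token logits from the model on forget data.
    k, delta: assumed hyperparameters controlling which logits are touched
    and by how much.
    """
    # Identify the model's high-confidence tokens (top-k logits).
    topk_idx = logits.topk(k, dim=-1).indices
    mask = torch.zeros_like(logits).scatter_(-1, topk_idx, 1.0)

    # "Dispreferred" distribution: amplify the high-confidence logits,
    # concentrating mass on what the model currently memorizes.
    dispreferred = F.softmax(logits + delta * mask, dim=-1)
    # "Preferred" distribution: suppress the same logits, pushing mass
    # away from the memorized tokens.
    preferred = F.softmax(logits - delta * mask, dim=-1)
    return preferred, dispreferred
```

These two distributions could then serve as the chosen/rejected pair in a DPO-style objective computed per token position, which is the general shape the abstract hints at; the actual DiPO loss and construction are derived in the paper linked above.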