Unifying Online and Counterfactual Learning to Rank - WSDM 2021

The WSDM '21 pre-recorded presentation for our full paper: "Unifying Online and Counterfactual Learning to Rank: A Novel Counterfactual Estimator that Effectively Utilizes Online Interventions" by Harrie Oosterhuis and Maarten de Rijke.

PDF, slides and poster are available here: https://harrieo.github.io//publicatio...
Code available here: https://github.com/HarrieO/2021wsdm-u...
Follow us on Twitter: @harrieoos and @mdr

Paper abstract:
Counterfactual Learning to Rank (LTR) methods optimize ranking systems using logged user interactions that contain interaction biases. Existing methods are only unbiased if users are presented with all relevant items in every ranking. There is currently no counterfactual unbiased LTR method for top-k rankings. We introduce a novel policy-aware counterfactual estimator for LTR metrics that can account for the effect of a stochastic logging policy. We prove that the policy-aware estimator is unbiased if every relevant item has a non-zero probability of appearing in the top-k ranking. Our experimental results show that the performance of our estimator is not affected by the size of k: for any k, the policy-aware estimator reaches the same retrieval performance while learning from top-k feedback as when learning from feedback on the full ranking. Lastly, we introduce novel extensions of traditional LTR methods to perform counterfactual LTR and to optimize top-k metrics. Together, our contributions introduce the first policy-aware unbiased LTR approach that learns from top-k feedback and optimizes top-k metrics. As a result, counterfactual LTR is now applicable to the very prevalent top-k ranking setting in search and recommendation.

Video references:
A. Agarwal, X. Wang, C. Li, M. Bendersky, and M. Najork. Addressing trust bias for unbiased learning-to-rank. In The World Wide Web Conference, pages 4-14. ACM, 2019.
O. Chapelle and Y. Chang. Yahoo! Learning to Rank Challenge overview. Journal of Machine Learning Research, 14:1-24, 2011.
N. Craswell, O. Zoeter, M. Taylor, and B. Ramsey. An experimental comparison of click position-bias models. In Proceedings of the 2008 International Conference on Web Search and Data Mining, pages 87-94. ACM, 2008.
T. Joachims, A. Swaminathan, and T. Schnabel. Unbiased learning-to-rank with biased feedback. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, pages 781-789. ACM, 2017.
H. Oosterhuis and M. de Rijke. Policy-aware unbiased learning to rank for top-k rankings. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 489-498. ACM, 2020.
Z. Ovaisi, R. Ahsan, Y. Zhang, K. Vasilaky, and E. Zheleva. Correcting for selection bias in learning-to-rank systems. In Proceedings of The Web Conference 2020, pages 1863-1873. ACM, 2020.
A. Vardasbi, H. Oosterhuis, and M. de Rijke. When inverse propensity scoring does not work: Affine corrections for unbiased learning to rank. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management. ACM, 2020.

Chapters
--------------------------
0:00 Title and Authors
0:30 Introduction: Unbiased Learning to Rank
1:28 Forms of Bias
2:27 Intervention-Oblivious Estimator
4:57 Basic Example
8:35 Intervention-Aware Estimator
10:05 Basic Example Revisited
11:33 Experimental Setup
11:56 Results: Comparison with Counterfactual Methods
14:25 Results: Comparison with Online Methods
17:11 Conclusion
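To make the policy-aware estimator from the abstract above concrete, here is a minimal Python sketch. It is illustrative only, not the authors' implementation (their code is at the GitHub link above), and every name in it (rank_probs, obs_prob, clicks, metric_weight) is a hypothetical input: the propensity of a document is its examination probability marginalized over the rankings the stochastic logging policy could produce, and clicks are reweighted by the inverse of that propensity.

    import numpy as np

    def policy_aware_propensity(rank_probs_d, obs_prob):
        # Expected examination probability of document d under the stochastic
        # logging policy: rho(d) = sum_k P(rank(d) = k) * P(observed | rank k).
        # obs_prob is zero beyond the top-k cutoff, so rho(d) > 0 only if the
        # policy can place d inside the top k.
        return float(np.dot(rank_probs_d, obs_prob))

    def policy_aware_estimate(clicks, rank_probs, obs_prob, metric_weight):
        # Inverse-propensity-scored estimate of an additive ranking metric:
        # each click on document d contributes its metric weight divided by
        # d's expected examination probability.
        total = 0.0
        for d, num_clicks in clicks.items():
            rho = policy_aware_propensity(rank_probs[d], obs_prob)
            if rho > 0.0:  # unbiasedness requires rho > 0 for relevant items
                total += num_clicks * metric_weight[d] / rho
        return total

    # Toy usage: three documents, top-2 feedback, a randomizing logging policy.
    obs_prob = np.array([1.0, 0.5, 0.0])      # examination prob. per rank (k = 2)
    rank_probs = {                            # P(rank(d) = k) under the policy
        "d1": np.array([0.6, 0.3, 0.1]),
        "d2": np.array([0.3, 0.4, 0.3]),
        "d3": np.array([0.1, 0.3, 0.6]),
    }
    clicks = {"d1": 4, "d2": 1, "d3": 0}
    metric_weight = {"d1": 1.0, "d2": 1.0, "d3": 1.0}
    print(policy_aware_estimate(clicks, rank_probs, obs_prob, metric_weight))

The key condition from the abstract shows up directly in the code: if rank_probs gave some relevant document zero probability of entering the top k, its propensity would be zero and no reweighting could recover its contribution.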
