Atelier/Workshop: Jeux à champ moyen/Mean Field Games, 15 Avril/April 15: http://www.crm.umontreal.ca/2022/Game...

Xin Guo: One of the challenges for multi-agent reinforcement learning (MARL) is designing efficient learning algorithms for a large system in which each agent has only limited or partial information about the entire system. Moreover, there has been little theoretical study of decentralized MARL for modeling self-driving vehicles, ridesharing, and data and traffic routing. We propose a mean-field-game (MFG) framework to study such MARL problems over a network of states.

The theoretical analysis consists of three key components: first, the reformulation of the MARL system as a networked Markov decision process with teams of agents, which enables updating the associated team Q-function in a localized fashion; second, the Bellman equation for the value function and the appropriate Q-function on the probability measure space; and third, the exponential decay property of the team Q-function, which facilitates its approximation with sample efficiency and controllable error.

This analysis paves the way for a new algorithm, an actor-critic approach with overparameterized neural networks. Its convergence and sample complexity are established and shown to be scalable with respect to the numbers of both agents and states.
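To give intuition for the third component above, here is a minimal, hedged sketch of an exponential decay property on a network of teams. The graph (a line of teams), the decay rate `rho`, and the toy `team_q` function are all illustrative assumptions of this sketch, not the construction in the talk: the point is only that when a team's Q-value depends on other teams with influence decaying geometrically in graph distance, truncating to a κ-hop neighborhood gives a controllable, geometrically shrinking error.

```python
import numpy as np

# Illustrative assumptions (not from the talk): a line graph of N teams,
# one scalar local state per team, and cross-team influence on team i's
# Q-value decaying like rho**distance.
N = 9
rho = 0.5
rng = np.random.default_rng(0)
states = rng.standard_normal(N)

def team_q(i, s):
    """Toy team Q-value: team j contributes with weight rho**|i - j|."""
    return sum(rho ** abs(i - j) * s[j] for j in range(N))

def truncated_q(i, s, kappa):
    """kappa-hop approximation: keep only teams within distance kappa of i."""
    return sum(rho ** abs(i - j) * s[j]
               for j in range(N) if abs(i - j) <= kappa)

i = N // 2
errors = [abs(team_q(i, states) - truncated_q(i, states, kappa))
          for kappa in range(N)]
# Each truncation error is bounded by the geometric tail
# 2 * max|s| * rho**(kappa+1) / (1 - rho), so it shrinks exponentially
# in the neighborhood radius kappa.
```

This locality is what makes a decentralized algorithm plausible: each team can estimate its Q-function from a fixed-radius neighborhood, so the per-team sample cost need not grow with the total number of teams.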