
Independent and Decentralized Learning in Markov Potential Games

Chinmay Maheshwari, Manxi Wu, Druv Pai, Shankar Sastry
May 2022
Abstract
We propose a multi-agent reinforcement learning dynamics, and analyze its convergence properties in infinite-horizon discounted Markov potential games. We focus on the independent and decentralized setting, where players can only observe the realized state and their own reward in every stage. Players do not have knowledge of the game model, and cannot coordinate with each other. In each stage of our learning dynamics, players update their estimate of a perturbed Q-function that evaluates their total contingent payoff based on the realized one-stage reward in an asynchronous manner. Then, players independently update their policies by incorporating a smoothed optimal one-stage deviation strategy based on the estimated Q-function. A key feature of the learning dynamics is that the Q-function estimates are updated at a faster timescale than the policies. We prove that the policies induced by our learning dynamics converge to a stationary Nash equilibrium in Markov potential games with probability 1. Our results build on the theory of two-timescale asynchronous stochastic approximation, and new analysis on the monotonicity of the potential function along the trajectory of policy updates in Markov potential games.
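The two-timescale structure described in the abstract can be illustrated with a minimal sketch. The environment dynamics, reward, step sizes, and smoothing rule below are all illustrative stand-ins, not the paper's exact algorithm: one player keeps an asynchronous Q-estimate updated with a fast-decaying step size at the visited state-action pair, and nudges its policy toward a softmax (smoothed one-stage deviation) of that estimate with a slower-decaying step size.

```python
import numpy as np

# Hypothetical sketch of one player's independent two-timescale update.
# The random reward/transition below stand in for the unknown game model:
# the player only sees the realized state and its own one-stage reward.
rng = np.random.default_rng(0)
n_states, n_actions, gamma, tau = 3, 2, 0.9, 0.5  # tau: smoothing temperature

q = np.zeros((n_states, n_actions))               # perturbed Q-estimate
pi = np.full((n_states, n_actions), 1.0 / n_actions)  # current policy

def softmax(x):
    """Smoothed optimal one-stage deviation over actions."""
    z = np.exp((x - x.max()) / tau)
    return z / z.sum()

s = 0
for t in range(1, 5001):
    a = rng.choice(n_actions, p=pi[s])
    r = rng.normal(loc=float(s == a))             # realized one-stage reward
    s_next = int(rng.integers(n_states))          # realized next state
    # Fast timescale: asynchronous Q-update only at the visited (s, a).
    alpha = 1.0 / t**0.6
    q[s, a] += alpha * (r + gamma * pi[s_next] @ q[s_next] - q[s, a])
    # Slow timescale: move the policy toward the smoothed best response.
    beta = 1.0 / t**0.9
    pi[s] = (1 - beta) * pi[s] + beta * softmax(q[s])
    s = s_next
```

Because each policy row is updated as a convex combination of two points on the simplex, `pi` remains a valid policy throughout; the separation `beta/alpha -> 0` is what lets the Q-estimates track a near-equilibrated value while the policies drift slowly.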