# SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning

Oct 2022

To lower the communication complexity of federated min-max learning, a natural approach is to utilize the idea of infrequent communications (through multiple local updates), as in conventional federated learning. However, due to the more complicated inner-outer problem structure in federated min-max learning, theoretical understanding of communication complexity for federated min-max learning with infrequent communications remains very limited in the literature. This is particularly true for settings with non-i.i.d. datasets and partial client participation. To address this challenge, in this paper, we propose a new algorithmic framework called stochastic sampling averaging gradient descent ascent (SAGDA), which i) assembles stochastic gradient estimators from randomly sampled clients as control variates and ii) leverages two learning rates on both server and client sides. We show that SAGDA achieves a linear speedup in terms of both the number of clients and local update steps, which yields an $\mathcal{O}(\epsilon^{-2})$ communication complexity that is orders of magnitude lower than the state of the art. Interestingly, by noting that the standard federated stochastic gradient descent ascent (FSGDA) is in fact a control-variate-free special version of SAGDA, we immediately arrive at an $\mathcal{O}(\epsilon^{-2})$ communication complexity result for FSGDA. Therefore, through the lens of SAGDA, we also advance the current understanding of the communication complexity of the standard FSGDA method for federated min-max learning.
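To make the setting concrete, the following is a minimal sketch of federated stochastic gradient descent ascent (FSGDA), which the abstract identifies as the control-variate-free special case of SAGDA: sampled clients run multiple local descent-ascent steps with a client learning rate, and the server averages their model deltas under a separate server learning rate. The toy quadratic min-max objective, hyperparameter names, and sampling scheme below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch of FSGDA (control-variate-free special case of SAGDA).
# The per-client objective f_i(x, y) = a_i/2 * x^2 + b_i * x * y - c_i/2 * y^2
# and all hyperparameters are assumptions for demonstration only.

rng = np.random.default_rng(0)
M = 20                                    # total number of clients
a = rng.uniform(1.0, 2.0, M)
b = rng.uniform(-1.0, 1.0, M)
c = rng.uniform(1.0, 2.0, M)

def grads(i, x, y, noise=0.01):
    """Stochastic gradients of f_i w.r.t. x (minimized) and y (maximized)."""
    gx = a[i] * x + b[i] * y + noise * rng.standard_normal()
    gy = b[i] * x - c[i] * y + noise * rng.standard_normal()
    return gx, gy

def fsgda(rounds=200, sampled=5, K=10, eta_c=0.05, eta_s=1.0):
    x, y = 1.0, 1.0
    for _ in range(rounds):
        # Partial client participation: sample a subset of clients each round.
        clients = rng.choice(M, size=sampled, replace=False)
        dx_sum, dy_sum = 0.0, 0.0
        for i in clients:
            xi, yi = x, y
            for _ in range(K):            # K local SGDA steps (client lr eta_c)
                gx, gy = grads(i, xi, yi)
                xi -= eta_c * gx          # gradient descent on x
                yi += eta_c * gy          # gradient ascent on y
            dx_sum += xi - x
            dy_sum += yi - y
        # Server averages client deltas and applies its own learning rate.
        x += eta_s * dx_sum / sampled
        y += eta_s * dy_sum / sampled
    return x, y

x, y = fsgda()
print(abs(x), abs(y))   # both drift toward the saddle point at (0, 0)
```

Each client here only communicates once per round (after K local steps), which is the infrequent-communication mechanism the abstract refers to; SAGDA would additionally maintain control variates built from the sampled clients' stochastic gradient estimators to correct the local updates.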

Q1: What problem does the paper attempt to solve?
Q2: Is this a new problem?
Q3: What scientific hypothesis does this paper aim to verify?