SAGDA: Achieving $\mathcal{O}(\epsilon^{-2})$ Communication Complexity in Federated Min-Max Learning

Haibo Yang, Zhuqing Liu, Xin Zhang, Jia Liu
Oct 2022
Abstract
To lower the communication complexity of federated min-max learning, a natural approach is to utilize the idea of infrequent communications (through multiple local updates), as in conventional federated learning. However, due to the more complicated inter-outer problem structure in federated min-max learning, theoretical understanding of the communication complexity of federated min-max learning with infrequent communications remains very limited in the literature. This is particularly true for settings with non-i.i.d. datasets and partial client participation. To address this challenge, in this paper we propose a new algorithmic framework called stochastic sampling averaging gradient descent ascent (SAGDA), which i) assembles stochastic gradient estimators from randomly sampled clients as control variates and ii) leverages two learning rates on both server and client sides. We show that SAGDA achieves a linear speedup in terms of both the number of clients and local update steps, which yields an $\mathcal{O}(\epsilon^{-2})$ communication complexity that is orders of magnitude lower than the state of the art. Interestingly, by noting that the standard federated stochastic gradient descent ascent (FSGDA) is in fact a control-variate-free special version of SAGDA, we immediately arrive at an $\mathcal{O}(\epsilon^{-2})$ communication complexity result for FSGDA. Therefore, through the lens of SAGDA, we also advance the current understanding of the communication complexity of the standard FSGDA method for federated min-max learning.
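To make the setting concrete, below is a minimal sketch of the FSGDA-style update structure the abstract refers to: sampled clients run several local descent-ascent steps with a client-side learning rate, and the server averages their model changes with a separate server-side learning rate, so communication happens only once per round. The toy objective, client sampling rule, and all hyper-parameters are illustrative assumptions and not from the paper; the full SAGDA algorithm additionally assembles control variates from the sampled clients' stochastic gradients, which this sketch omits.

```python
import numpy as np

# Illustrative federated stochastic gradient descent ascent (FSGDA) with
# infrequent communication and two-sided learning rates. Not the paper's
# exact SAGDA algorithm (no control variates here).

rng = np.random.default_rng(0)
d, num_clients, clients_per_round = 5, 20, 5
local_steps, rounds, noise = 10, 100, 0.01
eta_client, eta_server = 0.05, 1.0          # client-side and server-side learning rates

# Non-i.i.d. clients: f_i(x, y) = 0.5||x - a_i||^2 + x^T y - 0.5||y - b_i||^2,
# a strongly-convex-strongly-concave saddle problem with per-client data (a_i, b_i).
a = rng.standard_normal((num_clients, d))
b = rng.standard_normal((num_clients, d))

x, y = np.zeros(d), np.zeros(d)             # server model

for _ in range(rounds):                      # one round = one communication
    sampled = rng.choice(num_clients, size=clients_per_round, replace=False)
    dx, dy = np.zeros(d), np.zeros(d)
    for i in sampled:
        xi, yi = x.copy(), y.copy()          # start local model from the server model
        for _ in range(local_steps):         # K local descent-ascent steps
            gx = (xi - a[i]) + yi + noise * rng.standard_normal(d)  # stochastic grad_x f_i
            gy = xi - (yi - b[i]) + noise * rng.standard_normal(d)  # stochastic grad_y f_i
            xi -= eta_client * gx            # descent on the min variable x
            yi += eta_client * gy            # ascent on the max variable y
        dx += xi - x
        dy += yi - y
    # Server step: average the sampled clients' local model changes and apply
    # a separate server-side learning rate.
    x += eta_server * dx / clients_per_round
    y += eta_server * dy / clients_per_round

# Distance to the global saddle point of (1/n) * sum_i f_i.
x_star = (a.mean(axis=0) - b.mean(axis=0)) / 2
y_star = (a.mean(axis=0) + b.mean(axis=0)) / 2
print("error:", np.linalg.norm(x - x_star) + np.linalg.norm(y - y_star))
```

In this sketch the two learning rates correspond to eta_client (used inside each client's local loop) and eta_server (applied to the averaged update at the server), and partial participation is modeled by sampling clients_per_round of the num_clients clients each round.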