Federated Learning (FL) has recently emerged as a popular framework that allows resource-constrained, distributed clients to cooperatively learn a global model under the orchestration of a central server while keeping privacy-sensitive data stored locally. However, owing to differences in equipment and data divergence across heterogeneous clients, the parameters of the local models deviate from one another, which slows convergence and reduces the accuracy of the global model. Current FL algorithms pervasively use static client learning strategies and cannot adapt to the dynamic training parameters of different clients. In this paper, by accounting for the deviation between local model parameters, we propose an adaptive learning rate scheme for each client based on entropy theory to alleviate the deviation between heterogeneous clients and achieve fast convergence of the global model. Designing the optimal dynamic learning rate for each client is difficult because the local information of other clients is unknown, especially during local training epochs, when there is no communication between the clients and the central server. To enable a decentralized learning rate design for each client, we first introduce mean-field schemes to estimate the terms that depend on other clients' local model parameters. The decentralized adaptive learning rate for each client is then obtained in closed form by constructing the Hamilton equation. Moreover, we prove that fixed-point solutions exist for the mean-field estimators, and we propose an algorithm to compute them. Finally, extensive experimental results on real datasets show that, compared with other recent FL algorithms, our algorithm effectively eliminates the deviation between local model parameters.
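The fixed-point computation for the mean-field estimators can be illustrated with a minimal toy sketch. This is an illustrative assumption, not the paper's actual algorithm: it uses scalar models, quadratic local losses, and a simple deviation-based learning rate, and iterates a damped fixed-point map until the estimated mean trajectory is consistent with the trajectory it induces.

```python
# Hypothetical sketch of a mean-field fixed-point iteration for FL, under
# toy assumptions: each client holds a scalar model with quadratic local
# loss 0.5 * (w - target)^2. All names and the specific learning-rate
# formula are illustrative, not the paper's closed-form scheme.
import numpy as np

rng = np.random.default_rng(0)
n_clients, T = 5, 50
targets = rng.normal(size=n_clients)  # each client's local optimum

def simulate(phi):
    """Run T local gradient steps per client; the per-client learning
    rate shrinks as the client drifts away from the mean-field estimate
    phi[t], discouraging parameter deviation."""
    w = np.zeros(n_clients)
    traj = np.zeros((T, n_clients))
    for t in range(T):
        lr = 0.1 / (1.0 + np.abs(w - phi[t]))  # adaptive per-client rate
        w = w - lr * (w - targets)             # local gradient step
        traj[t] = w
    return traj

# Damped fixed-point iteration: the estimator phi must coincide with the
# average parameter trajectory that it induces across clients.
phi = np.zeros(T)
for _ in range(200):
    new_phi = simulate(phi).mean(axis=1)
    if np.max(np.abs(new_phi - phi)) < 1e-9:
        break
    phi = 0.5 * phi + 0.5 * new_phi
```

At the fixed point, each client can set its learning rate from `phi` alone, without communicating with other clients during local epochs, which is the decentralization the mean-field estimator is meant to provide.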