In federated learning, multiple clients jointly train a model under the coordination of a central server, while the training data remains on each client to preserve privacy. In practice, the inconsistent distribution of data across devices in a federated network and the limited communication bandwidth between end devices make statistical heterogeneity and expensive communication the major challenges of federated learning. This paper proposes FedFa, an algorithm that achieves better fairness and accuracy in federated learning. FedFa introduces an optimization scheme based on a double-momentum gradient, which accelerates the convergence of the model, and a weight-selection algorithm that combines the information quantity of training accuracy with training frequency to determine aggregation weights. This design helps address the unfairness in federated learning caused by server preferences for certain clients. Our results show that FedFa outperforms the baseline algorithms in both accuracy and fairness.
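The abstract names two ingredients: a double-momentum gradient update and aggregation weights mixing the information quantity of accuracy with training frequency. The sketch below is only an illustration of those two ideas, not the paper's actual formulas; all function names, momentum coefficients (`beta1`, `beta2`), and the mixing parameter `alpha` are assumptions introduced here.

```python
import numpy as np

def double_momentum_update(w, grad, m, v, lr=0.01, beta1=0.9, beta2=0.99):
    """One illustrative double-momentum step: two chained momentum
    buffers smooth the raw gradient before the parameter update.
    Coefficients and the combination rule are assumptions, not FedFa's."""
    m = beta1 * m + (1 - beta1) * grad   # first momentum buffer
    v = beta2 * v + (1 - beta2) * m      # second momentum buffer
    w = w - lr * (m + v)                 # update with both buffers
    return w, m, v

def client_weights(accuracies, frequencies, alpha=0.5):
    """Illustrative aggregation weights combining the information
    quantity (-log of the normalized accuracy share) with normalized
    training frequency; `alpha` balances the two terms (hypothetical)."""
    acc = np.asarray(accuracies, dtype=float)
    freq = np.asarray(frequencies, dtype=float)
    p = acc / acc.sum()                  # accuracy shares
    info = -np.log(p + 1e-12)            # information quantity per client
    info = info / info.sum()             # normalize to a distribution
    freq_n = freq / freq.sum()           # normalized training frequency
    weights = alpha * info + (1 - alpha) * freq_n
    return weights / weights.sum()       # final aggregation weights
```

Under this sketch, a client that is selected rarely (low frequency share) or whose accuracy share is small (high information quantity) receives a larger aggregation weight, which is one plausible way to counteract preference for frequently chosen clients.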