Federated learning (FL) is an emerging distributed machine learning paradigm that avoids data sharing among training nodes so as to protect data privacy. Under the coordination of the FL server, each client conducts model training using its own computing resources and private dataset. The global model is then created by aggregating the clients' training results. To cope with highly non-IID data distributions, personalized federated learning (PFL) has been proposed to improve overall performance by allowing each client to learn a personalized model. However, one major drawback of a personalized model is the loss of generalization. To achieve model personalization while maintaining generalization, in this paper we propose a new approach, named PFL-MoE, which mixes the outputs of the personalized model and the global model via the MoE architecture. PFL-MoE is a generic approach and can be instantiated by integrating existing PFL algorithms. In particular, we propose the PFL-MF algorithm, an instance of PFL-MoE based on the freeze-base PFL algorithm. We further improve PFL-MF by enhancing the decision-making ability of the MoE gating network and propose a variant algorithm, PFL-MFE. We demonstrate the effectiveness of PFL-MoE by training the LeNet-5 and VGG-16 models on the Fashion-MNIST and CIFAR-10 datasets with non-IID partitions.
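To make the mixing idea concrete, below is a minimal PyTorch-style sketch of a client-side module that blends the outputs of a global model and a personalized model through a gating network, as the abstract describes. This is an illustrative sketch under stated assumptions, not the paper's reference implementation; the names (PFLMoE, global_expert, personal_expert, gate) and the single-logit sigmoid gate are hypothetical choices.

```python
import torch
import torch.nn as nn

class PFLMoE(nn.Module):
    """Sketch of MoE-style output mixing for personalized FL.

    Assumes both experts map an input batch to logits of the same shape.
    """

    def __init__(self, global_expert: nn.Module,
                 personal_expert: nn.Module, in_dim: int):
        super().__init__()
        self.global_expert = global_expert      # shared model from FL aggregation
        self.personal_expert = personal_expert  # locally adapted model
        # Gating network (hypothetical form): maps the flattened input
        # to a per-example mixing weight in (0, 1).
        self.gate = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)  # shape (batch, 1), broadcasts over the logits
        # Convex combination of the two experts' outputs.
        return g * self.personal_expert(x) + (1 - g) * self.global_expert(x)
```

In this sketch, a gate value near 1 favors the personalized expert on inputs resembling the client's local distribution, while a value near 0 falls back to the global model, which is one plausible way to retain generalization alongside personalization.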