Federated Learning is an emerging foundational technology for artificial intelligence, first proposed by Google in 2016, originally to let Android users update models locally on their phones. Its design goal is to carry out efficient machine learning across multiple participants or compute nodes while guaranteeing information security during big-data exchange, protecting device-side and personal data privacy, and ensuring legal compliance. The machine learning algorithms usable in federated learning are not limited to neural networks; they also include other important algorithms such as random forests. Federated learning is expected to become the foundation of the next generation of collaborative AI algorithms and collaboration networks.
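The multi-party training loop described above is typically realized with federated averaging (FedAvg): each client trains on its own data and a server aggregates the resulting models. Below is a minimal, self-contained sketch; the toy least-squares task, client data, learning rate, and function names are illustrative assumptions, not part of the original text.

```python
import numpy as np

def local_update(w, data, lr=0.1):
    """One client's local training pass (here: a single full-batch
    gradient step on a least-squares loss)."""
    X, y = data
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(global_w, clients):
    """One round of federated averaging: every client trains locally on
    its own data, and the server averages the resulting models weighted
    by local dataset size. Raw data never leaves the clients."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_models = [local_update(global_w, c) for c in clients]
    coeffs = sizes / sizes.sum()
    return sum(a * m for a, m in zip(coeffs, local_models))

# Toy usage: two clients whose private data come from the same linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 150):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(300):
    w = fedavg_round(w, clients)
# w converges toward true_w without any client sharing its raw data
```

Note that only model parameters cross the network; the server never sees `X` or `y`, which is the privacy property the paragraph above describes.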

Federated Learning (FL) is an emerging privacy-preserving machine learning paradigm that has drawn great attention from both academia and industry. A defining characteristic of FL is heterogeneity, which stems from the fact that participating devices have diverse hardware specifications and dynamically changing states. Heterogeneity can profoundly affect the FL training process, for example by preventing devices from training at all or from uploading their model updates. Unfortunately, this impact has not been systematically studied or quantified in the existing FL literature. This paper presents the first empirical study of the impact of heterogeneity in FL. The authors collected a large dataset from 136,000 smartphones that faithfully reflects heterogeneity in real-world settings, and built an FL platform that complies with the standard FL protocol while accounting for heterogeneity. Extensive experiments on this data and platform compare the performance of state-of-the-art FL algorithms with and without heterogeneity taken into account. The results show that heterogeneity causes significant performance degradation in FL, including accuracy drops of up to 9.2%, a 2.32× increase in training time, and impaired fairness. A root-cause analysis further identifies device failure and participation bias as two potential underlying causes of the degradation. The study carries important implications for FL practitioners: on the one hand, the findings suggest that FL algorithm designers need to take heterogeneity into account during model evaluation; on the other hand, they urge FL system designers to build specific mechanisms to mitigate its effects. Chengxu Yang (杨程旭), a Ph.D. student at the center, is the first author of this paper.
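The participation-bias root cause named above can be illustrated with a small hypothetical simulation (the device counts, tiers, and failure probabilities below are invented for illustration, not taken from the study): when low-end devices drop out of training rounds more often, their updates become systematically under-represented in the aggregate.

```python
import random

# Toy simulation of participation bias: low-end devices fail a training
# round more often (slow hardware, dropped uploads), so high-end devices
# end up contributing disproportionately many model updates.
random.seed(42)

devices = [{"id": i, "tier": "low" if i < 70 else "high"} for i in range(100)]
fail_prob = {"low": 0.4, "high": 0.05}    # chance a sampled device drops out

participation = {d["id"]: 0 for d in devices}
for _ in range(200):                      # 200 FL rounds
    for d in random.sample(devices, 10):  # server samples 10 devices per round
        if random.random() > fail_prob[d["tier"]]:
            participation[d["id"]] += 1   # device completed the round

avg_low = sum(participation[d["id"]] for d in devices if d["tier"] == "low") / 70
avg_high = sum(participation[d["id"]] for d in devices if d["tier"] == "high") / 30
# avg_high ends up well above avg_low: the aggregated model is trained
# mostly on high-end devices' data, which skews accuracy and fairness.
```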


Latest Content

We study the optimization aspects of personalized Federated Learning (FL). We develop a universal optimization theory applicable to all strongly convex personalized FL models in the literature. In particular, we propose a general personalized objective capable of recovering essentially any existing personalized FL objective as a special case. We design several optimization techniques to minimize the general objective, namely a tailored variant of Local SGD and variants of accelerated coordinate descent/accelerated SVRCD. We demonstrate the practicality and/or optimality of our methods both in terms of communication and local computation. Surprisingly enough, our general optimization theory is capable of recovering best-known communication and computation guarantees for solving specific personalized FL objectives.
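As a rough sketch of the kind of objective the abstract describes — not the paper's actual formulation or algorithm — the code below assumes a simple mean-regularized personalized objective, a common special case in the personalized-FL literature, and minimizes it with plain gradient steps.

```python
import numpy as np

def personalized_gd(grads, n_clients, dim, lam=1.0, lr=0.05, rounds=300):
    """Minimize sum_i f_i(x_i) + (lam/2) sum_i ||x_i - mean(x)||^2
    by alternating local gradient steps with a pull toward the mean
    model (the mean is treated as fixed within each round)."""
    X = np.zeros((n_clients, dim))        # one personal model per client
    for _ in range(rounds):
        xbar = X.mean(axis=0)             # the "server" aggregate
        for i in range(n_clients):
            g = grads[i](X[i]) + lam * (X[i] - xbar)
            X[i] -= lr * g
    return X

# Toy usage: quadratics f_i(x) = 0.5 * ||x - b_i||^2 with distinct optima.
b = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
grads = [lambda x, bi=bi: x - bi for bi in b]
X = personalized_gd(grads, n_clients=3, dim=2, lam=1.0)
# Each x_i lands between its own optimum b_i and the average of all
# optima. In the objective, lam -> infinity forces a single shared
# model, while lam = 0 recovers purely local training.
```

The penalty parameter `lam` interpolating between a single global model and fully local models is what makes such objectives "general": many published personalized-FL formulations arise as special cases of this kind of regularized mixture.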
