We study the optimization aspects of personalized Federated Learning (FL). We develop a universal optimization theory applicable to all convex personalized FL models in the literature. In particular, we propose a general personalized objective capable of recovering essentially any existing personalized FL objective as a special case. We design several optimization techniques to minimize the general objective, namely a tailored variant of Local SGD and variants of accelerated coordinate descent/accelerated SVRCD. We demonstrate the practicality and/or optimality of our methods in terms of both communication and local computation. Lastly, we discuss the implications of our general optimization theory when applied to solving specific personalized FL objectives.
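For concreteness, one widely studied instance that such a general personalized objective can recover is the ℓ2-penalized mixture formulation, in which each client $i$ holds a local model $x_i$ that is pulled toward the average model. This is an illustrative special case assumed here for exposition, not necessarily the exact form of the general objective:

```latex
\min_{x_1, \dots, x_n \in \mathbb{R}^d}
\; \frac{1}{n} \sum_{i=1}^{n} f_i(x_i)
\; + \; \frac{\lambda}{2n} \sum_{i=1}^{n} \left\| x_i - \bar{x} \right\|^2,
\qquad \bar{x} := \frac{1}{n} \sum_{i=1}^{n} x_i,
```

where $f_i$ is client $i$'s local loss and $\lambda \ge 0$ trades off personalization against consensus: $\lambda = 0$ yields purely local training, while $\lambda \to \infty$ recovers the standard (non-personalized) FL objective with a single shared model.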