Federated learning has become a popular machine learning paradigm with many potential real-life applications, including recommendation systems, the Internet of Things (IoT), healthcare, and self-driving cars. Although most current applications focus on classification tasks, learning personalized generative models remains largely unexplored, and their benefits in the heterogeneous setting are still poorly understood. This work proposes a novel architecture combining a global, client-agnostic generative model with local, client-specific ones. We show that, using standard techniques for training federated models, our proposed architecture achieves privacy and personalization by implicitly disentangling the globally consistent representation (i.e., content) from the client-dependent variations (i.e., style). Using such a decomposition, personalized models can generate locally unseen labels while preserving the given client's style, and they can predict the labels of all clients with high accuracy by training a simple linear classifier on the global content features. Furthermore, the disentanglement enables other essential applications, such as data anonymization by sharing only the content. Extensive experimental evaluation corroborates our findings, and we also provide partial theoretical justifications for the proposed approach.
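The claim that a simple linear classifier trained on the shared content features transfers across clients can be illustrated with a minimal toy sketch. Everything below is a synthetic illustration of the idea, not the paper's actual model: the "disentanglement" is hard-coded by placing label-dependent content and client-dependent style in disjoint coordinate blocks, and the classifier is a least-squares fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: the first 2 dims carry label-dependent "content",
# the last 2 dims carry a client-dependent "style" offset.
def make_client_data(style, n=200):
    labels = rng.integers(0, 2, size=n)
    content = np.stack([labels, 1 - labels], axis=1).astype(float)
    content += 0.1 * rng.standard_normal((n, 2))
    style_feats = np.tile(style, (n, 1)) + 0.1 * rng.standard_normal((n, 2))
    return np.hstack([content, style_feats]), labels

# Two clients with very different styles but the same content structure.
Xa, ya = make_client_data(np.array([3.0, -3.0]))
Xb, yb = make_client_data(np.array([-5.0, 5.0]))

# "Disentangled" content features: keep only the shared content dims.
def content(X):
    return X[:, :2]

# Linear classifier (least squares with a bias term) fit on client A only.
A = np.hstack([content(Xa), np.ones((len(Xa), 1))])
w, *_ = np.linalg.lstsq(A, ya.astype(float), rcond=None)

def predict(X):
    feats = np.hstack([content(X), np.ones((len(X), 1))])
    return (feats @ w > 0.5).astype(int)

# Because the classifier sees only content, it is unaffected by the style
# shift and remains accurate on the unseen client B.
acc_b = (predict(Xb) == yb).mean()
```

The same classifier would fail if it were trained on the full feature vectors of one client and evaluated on the other, since the style blocks differ; restricting it to the content block is what makes it client-agnostic.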