Federated learning has become a prominent approach when different entities want to collaboratively learn a common model without sharing their training data. However, federated learning has two main drawbacks. First, it is bandwidth-inefficient, as it involves many message exchanges between the aggregating server and the participating entities. This bandwidth and the corresponding processing cost can be prohibitive if the participating entities are, for example, mobile devices. Second, although federated learning improves privacy by not sharing data, recent attacks have shown that it still leaks information about the training data. This paper presents a novel privacy-preserving federated learning scheme. The proposed scheme provides theoretical privacy guarantees, as it is based on differential privacy. Furthermore, it optimizes model accuracy by constraining the learning phase to a few selected weights. Finally, as shown experimentally, it reduces both upstream and downstream bandwidth by up to 99.9% compared to standard federated learning, making it practical for mobile systems.
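To make the two ideas in the abstract concrete, the following is a minimal sketch (not the paper's actual algorithm) of a client-side update that combines the two ingredients mentioned: restricting communication to a few selected weights, and adding Gaussian noise in the style of the Gaussian mechanism of differential privacy. The function name, parameters, and top-k selection rule are all illustrative assumptions.

```python
import numpy as np

def sparse_dp_update(weight_delta, k, clip_norm, noise_mult, rng):
    """Hypothetical client step: keep only the k largest-magnitude weight
    changes (cutting upstream bandwidth), clip their L2 norm to bound
    sensitivity, then add Gaussian noise to the transmitted coordinates."""
    # Select the k coordinates with the largest magnitude.
    idx = np.argsort(np.abs(weight_delta))[-k:]
    sparse = weight_delta[idx].copy()
    # Clip to bound the L2 sensitivity of the reported update.
    norm = np.linalg.norm(sparse)
    if norm > clip_norm:
        sparse *= clip_norm / norm
    # Gaussian-mechanism-style noise, only on the coordinates actually sent.
    sparse += rng.normal(0.0, noise_mult * clip_norm, size=k)
    # The client uploads (indices, noisy values) instead of the full vector.
    return idx, sparse

rng = np.random.default_rng(0)
delta = rng.normal(size=1000)
idx, vals = sparse_dp_update(delta, k=10, clip_norm=1.0, noise_mult=1.0, rng=rng)
print(len(idx), len(vals))  # 10 indices and 10 values instead of 1000 floats
```

Sending only 10 of 1000 coordinates already saves roughly 99% of upstream traffic in this toy setting, which illustrates (but does not reproduce) the kind of bandwidth reduction the abstract claims.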