Federated Learning (FL) is a distributed training paradigm in which participants collaborate to build a global model while preserving the privacy of their data, which remains stored on participant devices. However, the proposals designed to ensure such privacy also make it challenging to protect the training outcome against potential attackers. In this context, we present Fast, Private, and Protected (FPP), a novel approach that aims to safeguard federated training while enabling secure aggregation to preserve data privacy. This is accomplished by validating training rounds through participants' assessments and by enabling training recovery after an attack. FPP also employs a reputation-based mechanism to limit the participation of attackers. We built a Dockerized environment to evaluate the performance of FPP against other approaches in the literature (FedAvg, Power-of-Choice, and aggregation via Trimmed Mean and Median). Our experiments demonstrate that FPP converges rapidly and continues to converge even in the presence of malicious participants performing model poisoning attacks.
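To make the baseline aggregation rules named above concrete, the following is a minimal sketch (not the paper's implementation) of FedAvg-style weighted averaging and the coordinate-wise Trimmed Mean and Median aggregators, assuming each client's model update has been flattened into a NumPy vector; all function names and parameters here are illustrative.

```python
import numpy as np

def fedavg(updates, weights=None):
    """Weighted average of client updates (FedAvg-style aggregation)."""
    stacked = np.stack(updates)                      # shape: (n_clients, n_params)
    if weights is None:
        weights = np.ones(len(stacked))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return weights @ stacked

def coordinate_median(updates):
    """Coordinate-wise median: robust to a minority of poisoned updates."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean(updates, trim_ratio=0.1):
    """Coordinate-wise trimmed mean: per coordinate, drop the largest and
    smallest values before averaging."""
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(trim_ratio * len(updates))
    kept = stacked[k:len(updates) - k] if k > 0 else stacked
    return kept.mean(axis=0)

# Toy usage: three honest clients and one client sending a poisoned update.
honest = [np.array([1.0, 2.0]), np.array([1.1, 1.9]), np.array([0.9, 2.1])]
poisoned = [np.array([100.0, -100.0])]
updates = honest + poisoned
print("FedAvg:      ", fedavg(updates))
print("Median:      ", coordinate_median(updates))
print("Trimmed mean:", trimmed_mean(updates, trim_ratio=0.25))
```

In this toy run, the plain FedAvg result is dragged far from the honest clients' values by the single poisoned update, while the median and trimmed-mean aggregators stay close to them, which is why such rules are used as robustness baselines for comparison with FPP.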