In federated learning, multiple parties collaborate to train a global model over their respective datasets. Even though cryptographic primitives (e.g., homomorphic encryption) can help achieve data privacy in this setting, partial information may still leak across parties if these primitives are applied non-judiciously. In this work, we study the federated learning framework of SecureBoost [Cheng et al., FL@IJCAI'19] as a concrete example, demonstrate a leakage-abuse attack based on its leakage profile, and experimentally evaluate the effectiveness of our attack. We then propose two secure versions that rely on trusted execution environments. We implement and benchmark our protocols to show that they are 1.2-5.4X faster in computation and require 5-49X less communication than SecureBoost.