Federated learning is an emerging foundational AI technology, first proposed by Google in 2016 to let Android users update models locally on their devices. Its design goal is to carry out efficient machine learning across multiple participants or compute nodes while safeguarding information security during big-data exchange, protecting device and personal data privacy, and ensuring legal compliance. The machine learning algorithms usable in federated learning are not limited to neural networks; they also include other important algorithms such as random forests. Federated learning is expected to become the foundation of the next generation of collaborative AI algorithms and networks.


Data silos and privacy leakage during model training and deployment are the main obstacles currently hindering the development of AI. Federated learning has emerged as an effective means of privacy protection: it is a distributed machine learning method that trains a lossless model through local training at the participants and the exchange of parameters, without directly accessing the data sources. However, federated learning also carries significant security risks. This article analyzes three main security threats in federated learning, namely poisoning attacks, adversarial attacks, and privacy leakage, surveys the latest defenses against each, and proposes corresponding solutions.


Latest Paper

Federated learning (FL) is a distributed learning protocol in which a server aggregates a set of models learned by independent clients in order to proceed with the learning process. At present, model averaging, known as FedAvg, is one of the most widely adopted aggregation techniques. However, it is known to yield models with degraded prediction accuracy and slow convergence. In this work, we find that averaging models from different clients significantly diminishes the norm of the update vectors, resulting in a slow learning rate and low prediction accuracy. Therefore, we propose a new aggregation method called FedNNNN. Instead of simple model averaging, we adjust the norm of the update vector and introduce momentum control techniques to improve the aggregation effectiveness of FL. As a demonstration, we evaluate FedNNNN on multiple datasets and scenarios with different neural network models, and observe up to 5.4% accuracy improvement.
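The norm-shrinking effect described in the abstract can be sketched in a few lines: when clients push in partly different directions, averaging their update vectors yields a shorter resultant. The paper does not spell out FedNNNN's exact norm- and momentum-control rules, so `norm_adjusted_aggregate` below is a hypothetical illustration that simply rescales the averaged update back to the mean client norm; the function names are assumptions for this sketch, not the authors' API.

```python
import numpy as np

def fedavg(client_updates, weights=None):
    """Plain FedAvg: (weighted) average of the clients' update vectors."""
    if weights is None:
        weights = [1.0 / len(client_updates)] * len(client_updates)
    return sum(w * u for w, u in zip(weights, client_updates))

def norm_adjusted_aggregate(client_updates):
    """Hypothetical norm correction: rescale the averaged update so its
    norm matches the mean norm of the individual client updates."""
    avg = fedavg(client_updates)
    mean_norm = np.mean([np.linalg.norm(u) for u in client_updates])
    avg_norm = np.linalg.norm(avg)
    if avg_norm > 0:
        avg = avg * (mean_norm / avg_norm)
    return avg

# Two unit-norm client updates in partly different directions:
# plain averaging shrinks the norm to ~0.707, the rescaled version keeps 1.0.
u1 = np.array([1.0, 0.0])
u2 = np.array([0.0, 1.0])
print(np.linalg.norm(fedavg([u1, u2])))                   # ~0.707
print(np.linalg.norm(norm_adjusted_aggregate([u1, u2])))  # 1.0
```

The example makes the abstract's observation concrete: the averaged update is strictly shorter than any client's update whenever the clients disagree in direction, which slows learning if the server applies it as-is.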
