In DP-SGD, each round communicates a local SGD update, which leaks new information about the underlying local data set to the outside world. To provide privacy, Gaussian noise is added to each local SGD update. However, privacy leakage still accumulates over multiple training rounds. Therefore, to control the leakage over an increasing number of training rounds, we need to increase the Gaussian noise added per local SGD update. This dependence of the noise magnitude $\sigma$ on the number of training rounds $T$ may impose an impractical upper bound on $T$ (because $\sigma$ cannot be too large), leading to a low-accuracy global model (because the global model receives too few local SGD updates). This makes DP-SGD much less competitive than other existing privacy techniques. We show for the first time that for $(\epsilon,\delta)$-differential privacy, $\sigma$ can be chosen equal to $\sqrt{2(\epsilon+\ln(1/\delta))/\epsilon}$ for $\epsilon=\Omega(T/N^2)$. In many existing machine learning problems, $N$ (the data set size) is large and $T=O(N)$. Hence, $\sigma$ becomes ``independent'' of any choice $T=O(N)$ once $\epsilon=\Omega(1/N)$: the aggregated privacy leakage increases to a limit. This means that our $\sigma$ depends only on $N$ rather than on $T$. This important discovery brings DP-SGD to practice, as our experiments also demonstrate, because $\sigma$ can remain small enough for the trained model to attain high accuracy even for the large $T$ that is typical in practice.
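To make the role of $\sigma$ concrete, here is a minimal Python sketch that computes the noise multiplier from the closed form above and applies one noised update in the usual clip-then-add-Gaussian-noise style of DP-SGD. The function names (`sigma_for_privacy`, `dp_sgd_update`), the clipping norm `clip_norm`, and the noise scaling $\sigma \cdot C/\text{batch size}$ are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def sigma_for_privacy(epsilon, delta):
    """Noise multiplier from the closed form in the abstract:
    sigma = sqrt(2 * (epsilon + ln(1/delta)) / epsilon)."""
    return np.sqrt(2.0 * (epsilon + np.log(1.0 / delta)) / epsilon)

def dp_sgd_update(per_example_grads, clip_norm, sigma, rng):
    """One noised SGD update: clip each per-example gradient to L2 norm
    `clip_norm`, average, then add Gaussian noise with standard deviation
    sigma * clip_norm / batch_size (an assumed, standard scaling)."""
    batch_size = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm / batch_size, size=avg.shape)
    return avg + noise

# Example: sigma stays moderate even for a fairly small epsilon,
# independent of the number of rounds T (for T = O(N)).
rng = np.random.default_rng(0)
sigma = sigma_for_privacy(epsilon=0.5, delta=1e-5)   # ~6.9
grads = rng.normal(size=(8, 4))                      # 8 per-example gradients
update = dp_sgd_update(grads, clip_norm=1.0, sigma=sigma, rng=rng)
```

Note that in this sketch $\sigma$ is set once from $(\epsilon,\delta)$ and reused for every round; that is exactly the practical benefit claimed above, since $\sigma$ need not grow with $T$.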


