We consider training models with differential privacy (DP) using mini-batch gradients. The existing state-of-the-art, Differentially Private Stochastic Gradient Descent (DP-SGD), requires privacy amplification by sampling or shuffling to obtain the best privacy/accuracy/computation trade-offs. Unfortunately, the precise requirements on exact sampling and shuffling can be hard to obtain in important practical scenarios, particularly federated learning (FL). We design and analyze a DP variant of Follow-The-Regularized-Leader (DP-FTRL) that compares favorably (both theoretically and empirically) to amplified DP-SGD, while allowing for much more flexible data access patterns. DP-FTRL does not use any form of privacy amplification.
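To make the contrast with amplified DP-SGD concrete, below is a minimal Python sketch of the tree-aggregation idea behind DP-FTRL: per-round clipped gradients are accumulated, each binary-tree node over the rounds is noised exactly once, and the model is updated against the noisy prefix sum. The function names (`dp_ftrl`, `grad_fn`), the closed-form update against a quadratic regularizer centered at the initial point, and all hyperparameters are illustrative assumptions, not the paper's exact algorithm or privacy accounting.

```python
import numpy as np


def dyadic_nodes(t):
    """Dyadic intervals partitioning [1, t]: one per set bit of t."""
    nodes, start = [], 1
    for level in reversed(range(t.bit_length())):
        size = 1 << level
        if t & size:
            nodes.append((level, start))      # interval [start, start + size - 1]
            start += size
    return nodes


def dp_ftrl(grad_fn, theta0, rounds, lr, clip, sigma, seed=0):
    """Illustrative DP-FTRL sketch: FTRL steps against tree-aggregated noisy gradient sums."""
    rng = np.random.default_rng(seed)
    node_noise = {}                           # each tree node is noised exactly once and reused
    theta = theta0.copy()
    true_sum = np.zeros_like(theta0)
    for t in range(1, rounds + 1):
        g = grad_fn(theta, t)
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))   # clip the round's contribution
        true_sum += g
        noise = np.zeros_like(theta0)
        for node in dyadic_nodes(t):          # O(log t) nodes cover the prefix [1, t]
            if node not in node_noise:
                node_noise[node] = rng.normal(0.0, sigma * clip, size=theta0.shape)
            noise += node_noise[node]
        # With a quadratic regularizer centered at theta0, the FTRL update reduces to a
        # step from theta0 against the noisy prefix sum; no sampling or shuffling of the
        # data order is required for the privacy guarantee.
        theta = theta0 - lr * (true_sum + noise)
    return theta
```

In this sketch each round's gradient touches only O(log T) noisy tree nodes, which is the mechanism-level reason DP-FTRL can match amplified DP-SGD's utility while tolerating arbitrary (e.g., federated) data access patterns.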