While modern machine learning models rely on increasingly large training datasets, data is often limited in privacy-sensitive domains. Generative models trained with differential privacy (DP) on sensitive data can sidestep this challenge, providing access to synthetic data instead. However, training DP generative models is highly challenging due to the noise injected into training to enforce DP. We propose to leverage diffusion models (DMs), an emerging class of deep generative models, and introduce Differentially Private Diffusion Models (DPDMs), which enforce privacy using differentially private stochastic gradient descent (DP-SGD). We motivate why DP-SGD is well suited for training DPDMs, and thoroughly investigate the DM parameterization and the sampling algorithm, which turn out to be crucial ingredients in DPDMs. Furthermore, we propose noise multiplicity, a simple yet powerful modification of the DM training objective tailored to the DP setting to boost performance. We validate our novel DPDMs on widely-used image generation benchmarks and achieve state-of-the-art (SOTA) performance by large margins. For example, on MNIST we improve the SOTA FID from 48.4 to 5.01 and downstream classification accuracy from 83.2% to 98.1% for the privacy setting DP-$(\varepsilon{=}10, \delta{=}10^{-5})$. Moreover, on standard benchmarks, classifiers trained on DPDM-generated synthetic data perform on par with task-specific DP-SGD-trained classifiers, which has not been demonstrated before for DP generative models. Project page and code: https://nv-tlabs.github.io/DPDM.
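The two key ideas named above — averaging the diffusion training loss over several noise draws per data point (noise multiplicity) and aggregating gradients with per-example clipping plus Gaussian noise (DP-SGD) — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the toy denoiser, and the log-normal noise-level schedule are illustrative assumptions, and model gradients are stood in for by plain vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

def dm_loss_noise_multiplicity(x, denoise, K=8):
    """Per-example diffusion loss averaged over K noise draws ("noise multiplicity").
    Averaging inside each example's loss reduces gradient variance at no extra
    privacy cost, because DP-SGD clips and noises gradients per data point."""
    losses = []
    for _ in range(K):
        sigma = np.exp(rng.normal())        # toy log-normal noise level (assumption)
        eps = rng.normal(size=x.shape)      # Gaussian corruption of the data point
        x_noisy = x + sigma * eps
        losses.append(np.mean((denoise(x_noisy, sigma) - x) ** 2))
    return float(np.mean(losses))

def dp_sgd_aggregate(per_example_grads, clip_norm=1.0, noise_mult=1.0):
    """DP-SGD gradient aggregation: clip each example's gradient to clip_norm,
    sum, add Gaussian noise scaled to the clipping bound, then average."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    total += rng.normal(scale=noise_mult * clip_norm, size=total.shape)
    return total / len(per_example_grads)
```

With a trivial denoiser that always predicts zero, the averaged loss for a unit input is exactly 1 regardless of the sampled noise levels, which makes the averaging easy to check; in a real DPDM the per-example gradient of this averaged loss would be fed into the clip-and-noise aggregation step.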