Training deep neural networks (DNNs) with meaningful differential privacy (DP) guarantees severely degrades model utility. In this paper, we demonstrate that the architecture of a DNN has a significant impact on model utility in the context of private deep learning, an effect that is largely unexplored in previous studies. In light of this gap, we propose the first framework that employs neural architecture search for automatic model design in private deep learning, dubbed DPNAS. To integrate private learning with architecture search, we carefully design a novel search space and propose a DP-aware method for training candidate models. We empirically verify the effectiveness of the proposed framework. The searched model, DPNASNet, achieves state-of-the-art privacy/utility trade-offs: e.g., for a privacy budget of $(\epsilon, \delta)=(3, 1\times10^{-5})$, our model obtains test accuracy of $98.57\%$ on MNIST, $88.09\%$ on FashionMNIST, and $68.33\%$ on CIFAR-10. Furthermore, by studying the searched architectures, we provide several intriguing findings on designing private-learning-friendly DNNs, which can shed new light on model design for deep learning with differential privacy.