Deep learning model developers often use cloud GPU resources to experiment with large datasets and models that require expensive setups. However, this practice raises privacy concerns. Adversaries may be interested in: 1) personally identifiable information or objects encoded in the training images, and 2) the models trained with sensitive data, which can be used to launch model-based attacks. Learning deep neural networks (DNNs) from encrypted data remains impractical due to the large volume of training data and the expensive learning process. A few recent studies have tried to provide efficient, practical solutions for protecting data privacy in outsourced deep learning; however, we find that they are vulnerable to certain attacks. In this paper, we identify two types of attacks unique to outsourced deep learning: 1) the visual re-identification attack on the training data, and 2) the class membership attack on the learned models, both of which can break existing privacy-preserving solutions. We develop an image disguising approach to address these attacks and design a suite of methods to evaluate the level of attack resilience of a privacy-preserving solution for outsourced deep learning. The experimental results show that our image-disguising mechanisms provide a high level of protection against the two attacks while still producing high-quality DNN models for image classification.