Convolutional neural networks have recently been successful, enabling companies to develop neural-based products. This demands an expensive process involving data acquisition and annotation, as well as model generation, which usually requires experts. Given these costs, companies are concerned about protecting their models against copying and deliver them as black boxes accessed through APIs. Nonetheless, we argue that even black-box models still have some vulnerabilities. In preliminary work, we presented a simple yet powerful method to copy black-box models by querying them with natural random images. In this work, we consolidate and extend the copycat method: (i) some constraints are waived; (ii) an extensive evaluation on several problems is performed; (iii) models are copied between different architectures; and (iv) a deeper analysis is performed by examining copycat behavior. Results show that natural random images are effective for generating copycats for several problems.
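The attack described above can be sketched in miniature: query a black-box model with random inputs, collect its hard labels, and train a copy on the resulting (input, label) pairs. The snippet below is a minimal toy illustration, not the paper's actual pipeline; the hidden linear classifier stands in for a remote API, random vectors stand in for natural random images, and the copycat is a softmax regression fitted by gradient descent (all hypothetical choices for demonstration).

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 20, 3  # input dimension, number of classes (toy sizes)

# Hidden "black-box" model: in the paper this would be a remote API;
# here a hypothetical linear classifier stands in for it.
W_true = rng.normal(size=(D, K))

def black_box(X):
    # The attacker only observes hard labels, not probabilities.
    return np.argmax(X @ W_true, axis=1)

# 1) Query the black box with random inputs (stand-ins for natural images).
X_q = rng.normal(size=(2000, D))
y_q = black_box(X_q)

# 2) Train the copycat on the stolen (input, label) pairs.
W = np.zeros((D, K))
for _ in range(300):
    logits = X_q @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    Y = np.eye(K)[y_q]                            # one-hot targets
    W -= 0.1 * X_q.T @ (P - Y) / len(X_q)         # cross-entropy gradient step

# 3) Measure how often the copycat agrees with the black box on fresh queries.
X_test = rng.normal(size=(1000, D))
agreement = np.mean(np.argmax(X_test @ W, axis=1) == black_box(X_test))
print(f"agreement with black box: {agreement:.2f}")
```

Even this toy copy typically agrees with the hidden model on a large fraction of fresh inputs, which conveys the core point: hard labels on arbitrary queries leak enough information to train a functional replica.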