Deep neural networks provide state-of-the-art performance in a wide range of image classification problems, leveraging the availability of huge amounts of training data. Recent studies, however, have shown their vulnerability to adversarial attacks, spawning an intense research effort in this field. With the aim of building better systems, new countermeasures and stronger attacks are proposed by the day. On the attacker's side, there is growing interest in the realistic black-box scenario, in which the attacker has no access to the neural network parameters. The challenge is to design limited-complexity attacks that mislead the neural network without impairing image quality too much, so as not to draw the attention of human observers. In this work, we put special emphasis on this latter requirement and propose a powerful, low-complexity black-box attack that preserves perceptual image quality. Numerical experiments demonstrate the effectiveness of the proposed method both on tasks commonly considered in this context and on other applications in biometrics (face recognition) and forensics (camera model identification).
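To make the central requirement concrete (misleading a classifier through queries alone while keeping the perturbed image perceptually close to the original), the sketch below shows a generic score-based black-box attack with a perceptual-quality constraint. This is not the attack proposed in this work: `query_model` is a hypothetical black box returning class probabilities, SSIM is used as a stand-in quality metric, and `ssim_floor`, `step`, and `n_queries` are illustrative parameters.

```python
# A minimal sketch (not the paper's algorithm) of a query-only attack that
# trades off misclassification against perceptual quality, measured by SSIM.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def black_box_attack(x, true_label, query_model,
                     ssim_floor=0.95, step=0.02, n_queries=2000, seed=0):
    """Greedy random-search attack on a grayscale image x in [0, 1].

    A candidate perturbation is kept only if it lowers the black-box score
    of the true class and keeps SSIM(x, candidate) >= ssim_floor.
    """
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    best_score = query_model(x_adv)[true_label]
    for _ in range(n_queries):
        # Propose a small random perturbation of the current adversarial image.
        candidate = np.clip(x_adv + step * rng.standard_normal(x.shape), 0.0, 1.0)
        if ssim(x, candidate, data_range=1.0) < ssim_floor:
            continue  # reject: perceptual-quality constraint violated
        probs = query_model(candidate)  # one black-box query per candidate
        if probs[true_label] < best_score:
            x_adv, best_score = candidate, probs[true_label]
            if np.argmax(probs) != true_label:
                break  # success: classifier no longer predicts the true class
    return x_adv
```

In the black-box setting the query budget (`n_queries` here) is the natural complexity measure, since each candidate costs one model query; the SSIM floor plays the role of the perceptual-quality requirement emphasized above.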