As real-world images come in varying sizes, the machine learning model is part of a larger system that includes an upstream image scaling algorithm. In this paper, we investigate the interplay between vulnerabilities of the image scaling procedure and machine learning models in the decision-based black-box setting. We propose a novel sampling strategy to make a black-box attack exploit vulnerabilities in scaling algorithms, scaling defenses, and the final machine learning model in an end-to-end manner. Based on this scaling-aware attack, we reveal that most existing scaling defenses are ineffective under threat from downstream models. Moreover, we empirically observe that standard black-box attacks can significantly improve their performance by exploiting the vulnerable scaling procedure. We further demonstrate this problem on a commercial Image Analysis API with decision-based black-box attacks.