Deep neural network based image compression has been extensively studied, yet model robustness is largely overlooked, despite being crucial for service deployment. We perform adversarial attacks by injecting a small amount of noise perturbation into original source images, and then encode these adversarial examples using prevailing learnt image compression models. Experiments report severe distortion in the reconstruction of adversarial examples, revealing the general vulnerability of existing methods, regardless of the settings of the underlying compression model (e.g., network architecture, loss function, quality scale) or the optimization strategy used for injecting the perturbation (e.g., noise threshold, signal distance measurement). We then apply iterative adversarial finetuning to refine pretrained models: in each iteration, random source images and adversarial examples are mixed to update the underlying model. Results show that the proposed finetuning strategy substantially improves compression model robustness. Overall, our methodology is simple, effective, and generalizable, making it attractive for developing robust learnt image compression solutions. All materials are publicly accessible at https://njuvision.github.io/RobustNIC for reproducible research.
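The attack described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: a toy linear transform with uniform quantization stands in for a learnt compression model, and a projected sign-gradient loop (with a straight-through estimator for the non-differentiable rounding) maximizes reconstruction error under a noise threshold `eps`. All names and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in "codec": a linear analysis/synthesis transform with
# uniform quantization. This is NOT a learnt model -- just a minimal
# differentiable placeholder to illustrate the attack loop.
d, k = 64, 48
W = rng.standard_normal((k, d)) / np.sqrt(d)

def codec(x):
    y = np.round(W @ x)      # analysis transform + quantization
    return W.T @ y           # synthesis transform (reconstruction)

def distortion(x_src, x_in):
    # Reconstruction MSE of x_in, measured against the clean source x_src.
    return float(np.mean((codec(x_in) - x_src) ** 2))

def attack(x, eps=0.3, steps=10, lr=0.1):
    # Projected gradient ascent on reconstruction error; round() is treated
    # as identity when differentiating (straight-through estimator).
    x_adv, best, best_d = x.copy(), x.copy(), distortion(x, x)
    for _ in range(steps):
        err = W.T @ (W @ x_adv) - x                # STE: skip rounding
        grad = (2.0 / d) * (W.T @ (W @ err))       # d(MSE)/d(x_adv)
        x_adv = x_adv + lr * np.sign(grad)         # sign-gradient step
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project to L-inf ball
        d_cur = distortion(x, x_adv)
        if d_cur > best_d:                         # keep the worst-case input
            best, best_d = x_adv.copy(), d_cur
    return best

x = rng.standard_normal(d)
x_adv = attack(x)
clean, attacked = distortion(x, x), distortion(x, x_adv)
print(f"clean MSE {clean:.4f} -> adversarial MSE {attacked:.4f}")
```

In the finetuning stage the abstract describes, such adversarial examples would be mixed with random clean images in each iteration's batch before updating the model's parameters.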