Adversarial training has been demonstrated to be the most effective approach for defending against adversarial attacks. However, existing adversarial training methods exhibit apparent oscillation and overfitting during training, which degrades their defense efficacy. In this work, we propose a novel framework, termed Parameter Interpolation based Adversarial Training (PIAT), which makes full use of historical information during training. Specifically, at the end of each epoch, PIAT sets the model parameters to an interpolation of the parameters from the previous and current epochs. In addition, we propose using the Normalized Mean Square Error (NMSE) to further improve robustness by aligning clean and adversarial examples. Compared with other regularization methods, NMSE focuses on the relative magnitude of the logits rather than their absolute magnitude. Extensive experiments on several benchmark datasets and various networks show that our method significantly improves model robustness and reduces the generalization error. Moreover, our framework is general and can further boost robust accuracy when combined with other adversarial training methods.
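To make the two ideas in the abstract concrete, the following is a minimal PyTorch-style sketch of end-of-epoch parameter interpolation and an NMSE-style alignment loss. The interpolation coefficient `alpha`, the per-sample unit-norm scaling inside `nmse_loss`, and the helper names are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the two components described in the abstract.
# `alpha` and the exact NMSE normalization are assumptions for illustration.
import copy
import torch
import torch.nn.functional as F


def interpolate_parameters(model, prev_state, alpha=0.5):
    """At the end of an epoch, blend the current parameters with those
    saved at the end of the previous epoch."""
    curr_state = model.state_dict()
    new_state = {
        name: alpha * curr_state[name] + (1.0 - alpha) * prev_state[name]
        for name in curr_state
    }
    model.load_state_dict(new_state)
    return copy.deepcopy(new_state)  # becomes `prev_state` for the next epoch


def nmse_loss(clean_logits, adv_logits, eps=1e-8):
    """Normalized MSE: scale each sample's logits to unit norm before
    comparing, so only their relative (not absolute) magnitudes matter."""
    clean_n = clean_logits / (clean_logits.norm(dim=1, keepdim=True) + eps)
    adv_n = adv_logits / (adv_logits.norm(dim=1, keepdim=True) + eps)
    return F.mse_loss(adv_n, clean_n)
```

Under this sketch, the training loop would call `nmse_loss` on the clean and adversarial logits of each batch as a regularizer, and call `interpolate_parameters` once per epoch to smooth the trajectory of the weights.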