In this paper, we investigate improving the adversarial robustness obtained by adversarial training (AT) through reducing the difficulty of optimization. To better study this problem, we build a novel Bregman divergence perspective on AT, under which AT can be viewed as a process in which the training data points slide along the negative entropy curve. Based on this perspective, we analyze the learning objectives of two typical AT methods, i.e., PGD-AT and TRADES, and find that the optimization of TRADES is easier than that of PGD-AT because TRADES separates the objective of PGD-AT. In addition, we discuss the role of entropy in TRADES and find that models with high entropy can be better robustness learners. Inspired by these findings, we propose two methods, i.e., FAIT and MER, which not only reduce the difficulty of optimization under 10-step PGD adversaries but also provide better robustness. Our work suggests that reducing the difficulty of optimization under 10-step PGD adversaries is a promising approach to enhancing adversarial robustness in AT.
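For context, the two learning objectives compared above can be sketched in their standard forms; the notation here is assumed rather than drawn from this paper ($f_{\theta}$ is the classifier, $\mathcal{L}_{\mathrm{CE}}$ the cross-entropy loss, $\epsilon$ the perturbation budget, and $\beta$ the TRADES trade-off weight):

```latex
% PGD-AT: minimize the worst-case cross-entropy loss on adversarial examples
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Bigl[ \max_{\|\delta\|_{p} \le \epsilon}
    \mathcal{L}_{\mathrm{CE}}\bigl(f_{\theta}(x+\delta),\, y\bigr) \Bigr]

% TRADES: a clean cross-entropy term plus a KL consistency regularizer
\min_{\theta} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}
  \Bigl[ \mathcal{L}_{\mathrm{CE}}\bigl(f_{\theta}(x),\, y\bigr)
       + \beta \max_{\|\delta\|_{p} \le \epsilon}
         \mathrm{KL}\bigl(f_{\theta}(x) \,\|\, f_{\theta}(x+\delta)\bigr) \Bigr]
```

The separation of the PGD-AT objective is visible here: TRADES splits the single worst-case cross-entropy term into a clean-accuracy term and a robustness term, each of which is easier to optimize on its own.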