Knowledge distillation is effective for producing small, high-performance neural networks for classification, but these small networks are vulnerable to adversarial attacks. We first study how robustness transfers from a robust teacher to a student network during knowledge distillation. We find that a large amount of robustness may be inherited by the student even when distilled on only clean images. Second, we introduce Adversarially Robust Distillation (ARD) for distilling robustness onto small student networks. ARD is an analogue of adversarial training, but for distillation. In addition to producing small models with high test accuracy like conventional distillation, ARD also passes the superior robustness of large networks onto the student. In our experiments, we find that ARD student models decisively outperform adversarially trained networks of identical architecture in robust accuracy. Finally, we adapt recent fast adversarial training methods to ARD for accelerated robust distillation.
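To make the "adversarial training analogue for distillation" idea concrete, the following is a minimal numpy sketch of an ARD-style objective on a toy linear model. All details here are illustrative assumptions, not the paper's implementation: a one-step FGSM attack stands in for the inner maximization, the models are linear softmax classifiers, and the loss weights `alpha`, temperature `t`, and attack radius `eps` are arbitrary. The key structure it shows is that the student matches the teacher's soft labels on *adversarial* inputs while the teacher sees *clean* inputs.

```python
import numpy as np

def softmax(z, t=1.0):
    """Temperature-scaled softmax; t > 1 softens the distribution."""
    z = z / t
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical toy setup: linear "teacher" (fixed, assumed robust) and
# a randomly initialized linear "student" on 2-D inputs, 2 classes.
rng = np.random.default_rng(0)
W_teacher = np.array([[2.0, -1.0], [-1.0, 2.0]])
W_student = rng.normal(scale=0.1, size=(2, 2))

def fgsm(x, W, y, eps):
    """One-step FGSM on the student's cross-entropy loss — a cheap stand-in
    for the inner maximization of adversarial training."""
    p = softmax(x @ W)
    onehot = np.eye(W.shape[1])[y]
    grad_x = (p - onehot) @ W.T  # d(CE)/dx for a linear-softmax model
    return x + eps * np.sign(grad_x)

def ard_loss(x, y, alpha=0.9, t=2.0, eps=0.1):
    """Sketch of an ARD-style objective: temperature-scaled KL between the
    teacher on clean x and the student on adversarial x, plus a small
    clean cross-entropy term. Hyperparameters are illustrative."""
    x_adv = fgsm(x, W_student, y, eps)
    p_t = softmax(x @ W_teacher, t)       # teacher soft labels on clean input
    p_s = softmax(x_adv @ W_student, t)   # student on adversarial input
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()
    ce = -np.log(softmax(x @ W_student)[np.arange(len(y)), y]).mean()
    return alpha * (t ** 2) * kl + (1 - alpha) * ce
```

In a real training loop, the gradient of this loss with respect to `W_student` would drive the update, so the student learns to reproduce the robust teacher's outputs even under perturbation. The `t ** 2` factor follows the standard distillation convention of rescaling the soft-label gradient magnitude.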