The Lipschitz constant of neural networks has been established as a key quantity for enforcing robustness to adversarial examples. In this paper, we tackle the problem of building $1$-Lipschitz neural networks. By studying Residual Networks from a continuous-time dynamical system perspective, we provide a generic method to build $1$-Lipschitz neural networks and show that some previous approaches are special cases of this framework. We then extend this reasoning and show that ResNet flows derived from convex potentials define $1$-Lipschitz transformations, which leads us to define the {\em Convex Potential Layer} (CPL). A comprehensive set of experiments on several datasets demonstrates the scalability of our architecture and its benefits as an $\ell_2$-provable defense against adversarial examples.
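As a minimal numerical sketch of the Convex Potential Layer idea, the snippet below assumes the layer takes the gradient-step form $z \mapsto z - \frac{2}{\|W\|_2^2} W^\top \mathrm{ReLU}(Wz + b)$ (the matrix $W$, bias $b$, and dimensions are illustrative, not taken from the paper) and empirically checks that the map is non-expansive, i.e. $1$-Lipschitz in $\ell_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))
b = rng.standard_normal(d)

# Step size 2 / ||W||_2^2, where ||W||_2 is the spectral norm
# (largest singular value of W).
h = 2.0 / np.linalg.norm(W, 2) ** 2

def cpl(x):
    """Convex potential layer sketch: x - (2/||W||_2^2) W^T ReLU(Wx + b).

    ReLU is the gradient of a convex potential, so this is a gradient
    step on a convex function, which makes the map non-expansive.
    """
    return x - h * W.T @ np.maximum(W @ x + b, 0.0)

# Empirical 1-Lipschitz check over random input pairs:
# ||cpl(x) - cpl(y)|| / ||x - y|| should never exceed 1.
ratios = [
    np.linalg.norm(cpl(x) - cpl(y)) / np.linalg.norm(x - y)
    for x, y in (rng.standard_normal((2, d)) for _ in range(1000))
]
print(max(ratios))  # bounded by 1 (up to floating-point error)
```

The random-pair check only gives a lower bound on the true Lipschitz constant, but since the $1$-Lipschitz property here holds by construction, no pair can exceed ratio $1$.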