With the increasing application of deep learning in mission-critical systems, there is a growing need to obtain formal guarantees about the behavior of neural networks. Indeed, many approaches for verifying neural networks have recently been proposed, but these generally suffer from limited scalability or insufficient accuracy. A key component in many state-of-the-art verification schemes is computing lower and upper bounds on the values that neurons in the network can take for a specific input domain; the tighter these bounds, the more likely the verification is to succeed. Many common algorithms for computing these bounds are variations of the symbolic-bound propagation method; and among these, approaches that utilize a process called back-substitution are particularly successful. In this paper, we present an approach for making back-substitution produce tighter bounds. To achieve this, we formulate, and then minimize, the imprecision errors incurred during back-substitution. Our technique is general, in the sense that it can be integrated into numerous existing symbolic-bound propagation techniques with only minor modifications. We implement our approach as a proof-of-concept tool, and present favorable results compared to state-of-the-art verifiers that perform back-substitution.
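To make the bound-computation step concrete, the following is a minimal, self-contained sketch of symbolic-bound propagation with back-substitution for a toy two-layer ReLU network, in the spirit of DeepPoly/CROWN-style analyses. All names and weights here (W1, b1, W2, b2, x_lo, x_hi) are illustrative assumptions; this shows only the generic back-substitution step, not the error-minimization technique proposed in the paper.

```python
# A minimal sketch (NOT the paper's method) of symbolic-bound propagation with
# back-substitution for a toy two-layer ReLU network: z1 = W1 x + b1,
# a1 = relu(z1), z2 = W2 a1 + b2, with x constrained to the box [x_lo, x_hi].
import numpy as np

def concretize(A, c, x_lo, x_hi):
    """Bound the affine form A @ x + c over the input box [x_lo, x_hi]."""
    A_pos, A_neg = np.maximum(A, 0.0), np.minimum(A, 0.0)
    lo = A_pos @ x_lo + A_neg @ x_hi + c
    hi = A_pos @ x_hi + A_neg @ x_lo + c
    return lo, hi

def relu_relaxation(lo, hi):
    """Per-neuron linear relaxation of ReLU: slopes/intercepts such that
    lw*z + lb <= relu(z) <= uw*z + ub for all z in [lo, hi]."""
    lw = np.zeros_like(lo); lb = np.zeros_like(lo)
    uw = np.zeros_like(lo); ub = np.zeros_like(lo)
    active = lo >= 0.0              # ReLU behaves as the identity
    lw[active] = 1.0
    uw[active] = 1.0
    unstable = (lo < 0.0) & (hi > 0.0)
    s = hi[unstable] / (hi[unstable] - lo[unstable])
    uw[unstable] = s                # chord from (lo, 0) to (hi, hi)
    ub[unstable] = -s * lo[unstable]
    # DeepPoly-style heuristic for the lower bound of unstable neurons:
    lw[unstable] = (np.abs(hi[unstable]) >= np.abs(lo[unstable])).astype(float)
    return lw, lb, uw, ub

def output_bounds(W1, b1, W2, b2, x_lo, x_hi):
    # Concrete bounds on the pre-activation z1 = W1 x + b1.
    z1_lo, z1_hi = concretize(W1, b1, x_lo, x_hi)
    lw, lb, uw, ub = relu_relaxation(z1_lo, z1_hi)
    # Back-substitute z2 = W2 relu(z1) + b2 through the ReLU relaxation:
    # for an upper bound, positive coefficients take the upper relaxation and
    # negative coefficients take the lower one (and symmetrically below).
    W2p, W2n = np.maximum(W2, 0.0), np.minimum(W2, 0.0)
    A_hi = W2p * uw + W2n * lw      # affine upper bound on z2 in terms of z1
    c_hi = W2p @ ub + W2n @ lb + b2
    A_lo = W2p * lw + W2n * uw      # affine lower bound on z2 in terms of z1
    c_lo = W2p @ lb + W2n @ ub + b2
    # Substitute z1 = W1 x + b1 and concretize over the input box.
    _, z2_hi = concretize(A_hi @ W1, A_hi @ b1 + c_hi, x_lo, x_hi)
    z2_lo, _ = concretize(A_lo @ W1, A_lo @ b1 + c_lo, x_lo, x_hi)
    return z2_lo, z2_hi

if __name__ == "__main__":
    # Toy instance: 2 inputs, 2 hidden neurons, 1 output, x in [-1, 1]^2.
    W1 = np.array([[1.0, 1.0], [1.0, -1.0]]); b1 = np.zeros(2)
    W2 = np.array([[1.0, -1.0]]);             b2 = np.zeros(1)
    lo, hi = output_bounds(W1, b1, W2, b2,
                           np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
    print(lo, hi)  # a sound (if loose) enclosure of the output over the box
```

The key step is the sign-dependent choice of relaxation during back-substitution, and each such choice introduces imprecision; the paper's contribution, as stated above, is to formulate these imprecision errors and select relaxations that minimize them, which this generic sketch does not do.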