For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor, at operation time, whether the input to the DNN is similar to the data used in DNN training. While recent results on monitoring DNN activation patterns provide a sound guarantee by building an abstraction from the training data set, reducing false positives caused by slight input perturbations has remained an obstacle to successfully adopting these techniques. We address this challenge by integrating formal symbolic reasoning into the monitor construction process. The algorithm performs a sound worst-case estimate of neuron values under input (or feature) perturbation, before the abstraction function is applied to build the monitor. The provable robustness is further generalized to cases where monitoring a single neuron can use more than one bit, implying that one can record activation patterns with a fine-grained decision on the neuron value interval.
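The two ideas above can be illustrated with a minimal sketch: a sound worst-case (interval) estimate of monitored neuron values when the input is perturbed within an l-infinity ball, followed by a multi-bit abstraction that records which value intervals each neuron may fall into. This is not the paper's implementation; the perturbation radius, toy layer weights, and thresholds below are illustrative assumptions only.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Sound interval propagation through an affine layer y = W x + b."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """Sound interval propagation through ReLU."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def abstract_pattern(lo, hi, thresholds):
    """Map each neuron's worst-case interval [lo, hi] to the set of
    value buckets (multi-bit intervals) it may intersect."""
    edges = np.concatenate(([-np.inf], thresholds, [np.inf]))
    pattern = []
    for l, h in zip(lo, hi):
        buckets = frozenset(i for i in range(len(edges) - 1)
                            if h >= edges[i] and l < edges[i + 1])
        pattern.append(buckets)
    return pattern

# Toy monitored layer and perturbation radius (illustrative values).
W = np.array([[1.0, -0.5], [0.3, 0.8]])
b = np.array([0.1, -0.2])
EPSILON = 0.05

x = np.array([0.4, 0.6])                 # a training input (or feature vector)
lo, hi = x - EPSILON, x + EPSILON        # l-infinity perturbation ball around x
lo, hi = interval_relu(*interval_affine(lo, hi, W, b))

# Two thresholds per neuron -> a 2-bit (three-interval) abstraction.
thresholds = np.array([0.0, 0.5])
print(abstract_pattern(lo, hi, thresholds))
```

In this sketch, every bucket a neuron's worst-case interval can reach is recorded in the monitor, so any activation pattern reachable under the assumed perturbation of the training input is covered by construction.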