Recent work on deep neural network pruning has shown that there exist sparse subnetworks that achieve equal or improved accuracy, training time, and loss using fewer network parameters than their dense counterparts. Orthogonal to the pruning literature, deep neural networks are known to be susceptible to adversarial examples, which may pose risks in security- or safety-critical applications. Intuition suggests an inherent trade-off between sparsity and robustness such that these characteristics cannot coexist. We perform an extensive empirical evaluation and analysis testing the Lottery Ticket Hypothesis with adversarial training, and show that this approach enables us to find sparse, robust neural networks. Code for reproducing experiments is available here: https://github.com/justincosentino/robust-sparse-networks.
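The Lottery Ticket Hypothesis procedure referenced above prunes a trained network by weight magnitude and rewinds the surviving weights to their initial values. The following is a minimal NumPy sketch of that pruning-and-rewind step only; the function names are illustrative, and the adversarial-training loop the paper combines it with is not shown:

```python
import numpy as np

def magnitude_prune_mask(weights, prune_frac):
    """Return a boolean mask keeping the largest-magnitude (1 - prune_frac)
    fraction of weights, as in iterative magnitude pruning."""
    flat = np.abs(weights).ravel()
    k = int(round(prune_frac * flat.size))
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    # k-th smallest absolute value acts as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold

def lottery_ticket(init_weights, trained_weights, prune_frac):
    """Form a 'winning ticket': prune by trained-weight magnitude,
    then rewind the surviving weights to their initial values."""
    mask = magnitude_prune_mask(trained_weights, prune_frac)
    return init_weights * mask, mask
```

In the robust-sparsity setting studied here, the training run that produces `trained_weights` would use adversarial examples rather than clean inputs; the masking and rewind logic is unchanged.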