A deep neural network (DNN) is a deep-learning architecture: a neural network with at least one hidden layer. Like shallow neural networks, DNNs can model complex nonlinear systems, but the additional layers provide higher levels of abstraction and thereby increase the model's capacity.
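As an illustration only (not from the source), the sketch below assumes PyTorch, with illustrative layer sizes: a feedforward network whose two hidden layers each stack another nonlinear transformation on top of the previous one.

```python
import torch.nn as nn

# A minimal feedforward DNN sketch. The 784/256/64/10 sizes are
# illustrative (e.g., flattened 28x28 images, 10 output classes),
# not taken from the text above.
model = nn.Sequential(
    nn.Linear(784, 256),  # first hidden layer
    nn.ReLU(),            # nonlinearity between layers
    nn.Linear(256, 64),   # second hidden layer: higher-level features
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer
)
```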

VIP Content

Deep neural networks (DNNs) excel at visual recognition tasks and are increasingly used as a modeling framework for neural computation in the primate brain. Just like individual brains, each DNN has unique connectivity and representational characteristics. Here, we investigate individual differences among DNN instances that arise from varying only the random initialization of the network weights. Using tools commonly employed in systems neuroscience, we show that despite similar network-level classification performance, this minimal change in initial conditions before training leads to substantial differences in mid- and high-level network representations. We locate the source of the effect in an under-constrained alignment of category exemplars, rather than in misaligned category centroids. These results call into question the common practice of using single networks to gain insight into neural information processing, and suggest that computational neuroscientists working with DNNs may need to base their inferences on groups of multiple network instances.
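A minimal sketch of the setup the abstract describes, assuming PyTorch: two network instances that differ only in the random initialization of their weights, compared via representational dissimilarity matrices (RDMs), a standard systems-neuroscience tool. The architecture, stimuli, and the omitted training loop are all hypothetical, not the paper's.

```python
import torch
import torch.nn as nn

def make_instance(seed):
    # Identical architecture; only the random weight initialization differs.
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(100, 50), nn.ReLU(), nn.Linear(50, 10))

def rdm(features):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between the activation patterns evoked by each pair of stimuli.
    z = features - features.mean(dim=1, keepdim=True)
    z = z / z.norm(dim=1, keepdim=True)
    return 1.0 - z @ z.T

stimuli = torch.randn(20, 100)  # 20 hypothetical input stimuli
net_a, net_b = make_instance(0), make_instance(1)
with torch.no_grad():
    rdm_a = rdm(net_a[0](stimuli))  # hidden-layer activations, instance A
    rdm_b = rdm(net_b[0](stimuli))  # hidden-layer activations, instance B

# Correlate the two RDMs' upper triangles; after training, a low value
# would indicate diverging representations despite similar accuracy.
iu = torch.triu_indices(20, 20, offset=1)
pair = torch.stack([rdm_a[iu[0], iu[1]], rdm_b[iu[0], iu[1]]])
print(torch.corrcoef(pair)[0, 1].item())
```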


Latest Content

For deep neural networks (DNNs) to be used in safety-critical autonomous driving tasks, it is desirable to monitor at operation time whether the input to the DNN is similar to the data used in DNN training. While recent results on monitoring DNN activation patterns provide a soundness guarantee, because an abstraction is built from the training data set, reducing the false positives caused by slight input perturbations has been an obstacle to successfully adopting these techniques. We address this challenge by integrating formal symbolic reasoning into the monitor construction process. The algorithm performs a sound worst-case estimate of neuron values under input (or feature) perturbation before the abstraction function is applied to build the monitor. The provable robustness is further generalized to cases where monitoring a single neuron can use more than one bit, meaning that one can record activation patterns with a fine-grained decision on the neuron value interval.
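A minimal sketch of the two ingredients the abstract names, under stated assumptions (this is not the paper's algorithm or code): interval arithmetic gives a sound worst-case estimate of post-layer neuron values under an L-infinity input perturbation, and each neuron is then abstracted into an interval of threshold bins, the "more than one bit per neuron" idea. The layer weights, thresholds, and epsilon are hypothetical.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    # Sound worst-case bounds through a linear layer: split W into its
    # positive and negative parts so each output bound is attained by
    # the corresponding extreme of the input interval.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def bin_pattern(lo, hi, thresholds):
    # Abstract each neuron's value interval into an interval of threshold
    # bins (more than one bit per neuron). At run time, an input whose
    # bin interval falls outside the recorded abstraction is flagged.
    return list(zip(np.searchsorted(thresholds, lo),
                    np.searchsorted(thresholds, hi)))

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 8)), rng.normal(size=4)   # hypothetical layer
x, eps = rng.normal(size=8), 0.05                    # input, perturbation
lo, hi = interval_linear(x - eps, x + eps, W, b)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)    # ReLU is monotone
print(bin_pattern(lo, hi, thresholds=np.array([0.0, 0.5, 1.0])))
```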

