Given a neural network, training data, and a threshold, it is known to be NP-hard to find weights for the neural network such that the total error is below the threshold. We determine the algorithmic complexity of this fundamental problem precisely, by showing that it is ∃R-complete. This means that the problem is equivalent, up to polynomial-time reductions, to deciding whether a system of polynomial equations and inequalities with integer coefficients and real unknowns has a solution. If, as widely expected, ∃R is strictly larger than NP, our work implies that the problem of training neural networks is not even in NP.
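For illustration (this example is not from the original abstract), a tiny instance of the decision problem characterizing ∃R asks whether there exist real numbers satisfying a system of polynomial constraints with integer coefficients, such as:

```latex
\exists\, x, y \in \mathbb{R} :\quad x^2 + y^2 < 1 \;\wedge\; x y = 2 .
```

This particular instance has no solution: by the AM–GM inequality, $|xy| \le (x^2 + y^2)/2 < 1/2$, contradicting $xy = 2$. Deciding such systems in general is the canonical ∃R-complete problem.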