We present a new approach to approximating the solutions of variational equations with neural networks, based on the adaptive construction of a sequence of finite-dimensional subspaces whose basis functions are realizations of a sequence of neural networks. The finite-dimensional subspaces are then used to define a standard Galerkin approximation of the variational equation. This approach enjoys a number of advantages: the sequential nature of the algorithm offers a systematic way to enhance the accuracy of a given approximation; the sequential enhancements provide a useful error indicator that can serve as a criterion for terminating the updates; and the basic approach is largely oblivious to the nature of the partial differential equation under consideration. In addition, some basic theoretical results are presented on the convergence (or otherwise) of the method, and these are used to formulate guidelines for applying it.
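As a concrete illustration (not taken from the abstract itself; the symbols $a(\cdot,\cdot)$, $L$, $u_i$, $\varphi_i$, and $S_i$ are introduced here purely for exposition), one plausible form of such an adaptive Galerkin construction is the following. The variational equation reads: find $u \in V$ such that $a(u,v) = L(v)$ for all $v \in V$. Given the current subspace $S_i = \mathrm{span}\{\varphi_1,\dots,\varphi_i\}$, compute the Galerkin approximation
$$ u_i \in S_i: \quad a(u_i, v) = L(v) \quad \text{for all } v \in S_i, $$
then train a neural network whose realization $\varphi_{i+1}$ targets the error in $u_i$, for instance by approximately maximizing a normalized weak residual $v \mapsto L(v) - a(u_i, v)$ over the class of network realizations, and set $S_{i+1} = \mathrm{span}\{\varphi_1,\dots,\varphi_i,\varphi_{i+1}\}$. The size of the correction contributed by $\varphi_{i+1}$ is the quantity that can serve both as an error indicator and as a stopping criterion for the sequential updates.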