In the effort to quantify the success of neural networks in deep learning and other applications, there is great interest in understanding which functions are efficiently approximated by the outputs of neural networks. By now, a variety of results show that a wide range of functions can be approximated, sometimes with surprising accuracy, by these outputs. For example, it is known that the set of functions that can be approximated with exponential accuracy (in terms of the number of parameters used) includes, on one hand, very smooth functions such as polynomials and analytic functions (see e.g. \cite{E,S,Y}) and, on the other hand, very rough functions such as the Weierstrass function (see e.g. \cite{EPGB,DDFHP}), which is nowhere differentiable. In this paper, we add to the latter class of rough functions by showing that it also includes refinable functions. Namely, we show that refinable functions are approximated by the outputs of deep ReLU networks with a fixed width and increasing depth, with accuracy exponential in the number of parameters. Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design.