Radial basis function neural networks (\emph{RBFNNs}) are well known for their ability to approximate any continuous function on a closed, bounded set with arbitrary precision, given enough hidden neurons. In this paper, we introduce the first algorithm to construct coresets for \emph{RBFNNs}, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network, and thus approximate any function defined by an \emph{RBFNN} on the full input data. In particular, we construct coresets for radial basis and Laplacian loss functions. We then use our coresets to obtain a provable data subset selection algorithm for training deep neural networks. Since our coresets approximate every such function, they also approximate the gradient of each weight in a neural network, which is itself a particular function of the input. We then perform empirical evaluations of function approximation and data subset selection on popular network architectures and datasets, demonstrating the efficacy and accuracy of our coreset construction.
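As a point of reference, the "small weighted subset" claim can be made precise with the standard $\varepsilon$-coreset guarantee; the paper's exact query space and error bound may differ, so the following is only the generic form. Given an input set $P$, a weighted subset $(C, w)$ with $C \subseteq P$ is an $\varepsilon$-coreset if
\[
\Bigl|\, \sum_{p \in P} f_q(p) \;-\; \sum_{c \in C} w(c)\, f_q(c) \,\Bigr| \;\le\; \varepsilon \sum_{p \in P} f_q(p)
\qquad \text{for every query } q,
\]
where $f_q$ denotes the loss (e.g., a radial basis or Laplacian loss) induced by the network parameters $q$.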
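To make the sampling-and-reweighting idea behind such coresets concrete, here is a minimal, self-contained Python sketch for a Gaussian RBF loss. It is illustrative only: it uses uniform sampling probabilities as a placeholder, whereas a provable construction relies on carefully derived non-uniform importance (sensitivity) scores; all function names below are hypothetical and not from the paper.
\begin{verbatim}
import numpy as np

def rbf_loss(P, center, gamma):
    """Gaussian RBF loss of each point in P w.r.t. a center:
    f(p) = exp(-gamma * ||p - center||^2)."""
    d2 = np.sum((P - center) ** 2, axis=1)
    return np.exp(-gamma * d2)

def sampled_coreset(P, m, rng=None):
    """Illustrative sampling-based coreset: draw m points i.i.d.
    with probabilities `prob` and reweight each sample by
    1 / (m * prob) so the weighted sum is an unbiased estimate
    of the full-data loss. Uniform `prob` is a placeholder for
    the non-uniform sensitivities a provable construction uses."""
    rng = np.random.default_rng(rng)
    n = len(P)
    prob = np.full(n, 1.0 / n)
    idx = rng.choice(n, size=m, p=prob)
    weights = 1.0 / (m * prob[idx])
    return P[idx], weights

# Usage: the weighted coreset sum approximates the full-data loss.
P = np.random.default_rng(0).normal(size=(10000, 2))
C, w = sampled_coreset(P, m=200, rng=1)
center, gamma = np.zeros(2), 0.5
full = rbf_loss(P, center, gamma).sum()
approx = (w * rbf_loss(C, center, gamma)).sum()
print(full, approx)  # the two sums should be close
\end{verbatim}
The estimator is unbiased for any fixed query; the substance of a coreset result such as the paper's is that, with suitable sampling probabilities and sample size, the approximation holds uniformly over \emph{all} queries simultaneously.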