Thanks to their easy implementation via Radial Basis Functions (RBFs), meshfree kernel methods have proven to be an effective tool for, e.g., scattered data interpolation, PDE collocation, and classification and regression tasks. Their accuracy often depends on a length-scale hyperparameter, which is typically tuned via cross-validation schemes. Here we leverage approaches and tools from the machine learning community to introduce two-layered kernel machines, which generalize the classical RBF approaches that rely on a single hyperparameter. Indeed, the proposed learning strategy returns a kernel that is optimized not only along the Euclidean coordinate directions but that further incorporates kernel rotations. The kernel optimization is shown to be robust thanks to recently improved computations of cross-validation scores. Finally, the use of greedy approaches, and specifically of the Vectorial Kernel Orthogonal Greedy Algorithm (VKOGA), allows us to construct an optimized basis that adapts to the data. Beyond a rigorous analysis of the convergence of the resulting two-layered (2L)-VKOGA, its benefits are highlighted on both synthetic and real benchmark data sets.
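As a rough illustration of the two-layered construction sketched above, the following minimal example (our own sketch, not the authors' implementation) equips a Gaussian RBF with a learnable first-layer matrix A: with A = ε·I one recovers the classical single-hyperparameter kernel, while a full A additionally encodes anisotropic scalings and rotations. The cross-validation score is computed with Rippa's well-known leave-one-out shortcut; all function and variable names (two_layered_gaussian, loocv_score, A) are illustrative assumptions, not taken from the paper's code.

```python
# Minimal sketch of a two-layered Gaussian kernel
#   k_A(x, y) = exp(-||A(x - y)||^2),
# where the first layer is a linear map A (scalings + rotations) and the
# second layer is a standard Gaussian RBF. Names are illustrative only.
import numpy as np

def two_layered_gaussian(X, Y, A):
    """Gram matrix of k_A(x, y) = exp(-||A(x - y)||^2)."""
    XA, YA = X @ A.T, Y @ A.T                      # first layer: linear map
    sq = (np.sum(XA**2, 1)[:, None] + np.sum(YA**2, 1)[None, :]
          - 2.0 * XA @ YA.T)
    return np.exp(-np.maximum(sq, 0.0))            # second layer: Gaussian RBF

def loocv_score(K, f, reg=1e-12):
    """Leave-one-out CV error via Rippa's shortcut: e_i = c_i / (K^{-1})_{ii}."""
    Kinv = np.linalg.inv(K + reg * np.eye(len(K)))
    c = Kinv @ f                                   # interpolation coefficients
    return np.linalg.norm(c / np.diag(Kinv))       # aggregated LOOCV residuals

# Toy usage: an anisotropic target favors a rotated/scaled first layer
# over the classical isotropic choice A = eps * I.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (60, 2))
f = np.sin(3 * (X[:, 0] + 0.5 * X[:, 1]))          # varies along one direction
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A_iso = 2.0 * np.eye(2)                            # single length-scale kernel
A_rot = np.diag([3.0, 0.5]) @ R                    # scalings + rotation
for name, A in [("isotropic", A_iso), ("rotated", A_rot)]:
    print(name, loocv_score(two_layered_gaussian(X, X, A), f))
```

In the paper's setting the matrix A would be optimized against such cross-validation scores and the resulting kernel then passed to a greedy (VKOGA-type) basis selection; the sketch only shows why a full first-layer map can outperform a single length scale.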