We develop a general non-asymptotic analysis of learning rates in kernel ridge regression (KRR), applicable to arbitrary Mercer kernels with multi-dimensional support. Our analysis rests on an operator-theoretic framework, at the core of which lie two error bounds under reproducing kernel Hilbert space norms that encompass a general class of kernels and regression functions and extend readily to various inferential goals through augmenting results. When applied to KRR estimators, our analysis yields error bounds under the stronger supremum norm, in addition to the commonly studied weighted $L_2$ norm; in a concrete example specialized to the Mat\'ern kernel, the established bounds recover the nearly minimax-optimal rates. The wide applicability of our analysis is further demonstrated through two new theoretical results: (1) non-asymptotic learning rates for the mixed partial derivatives of KRR estimators, and (2) a non-asymptotic characterization of the posterior variances of Gaussian processes, which corresponds to uncertainty quantification in kernel methods and nonparametric Bayes.
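For concreteness, the KRR estimator referenced above is the standard regularized least-squares solution in the reproducing kernel Hilbert space; the following display is the textbook definition added here for readability (it is not quoted from the paper, and the symbols $n$, $\lambda$, and $\mathcal{H}_K$ are generic notation):
\[
\hat{f}_\lambda \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K} \; \frac{1}{n} \sum_{i=1}^{n} \bigl(y_i - f(x_i)\bigr)^2 \;+\; \lambda \, \|f\|_{\mathcal{H}_K}^2,
\]
where $\mathcal{H}_K$ denotes the reproducing kernel Hilbert space induced by the Mercer kernel $K$, $\{(x_i, y_i)\}_{i=1}^n$ are the observed data, and $\lambda > 0$ is the regularization parameter.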