Gaussian processes (GPs) provide a powerful non-parametric framework for reasoning over functions. Despite appealing theory, their superlinear computational and memory complexities have presented a long-standing challenge. State-of-the-art sparse variational inference methods trade modeling accuracy against complexity. However, the complexities of these methods still scale superlinearly in the number of basis functions, implying that sparse GP methods can learn from large datasets only when a small model is used. Recently, a decoupled approach was proposed that removes the unnecessary coupling between the complexities of modeling the mean and the covariance functions of a GP. It achieves linear complexity in the number of mean parameters, so an expressive posterior mean function can be modeled. While promising, this approach suffers from optimization difficulties due to ill-conditioning and non-convexity. In this work, we propose an alternative decoupled parametrization. It adopts an orthogonal basis in the mean function to model the residues that the standard coupled approach cannot learn. Our method therefore extends, rather than replaces, the coupled approach, achieving strictly better performance. This construction admits a straightforward natural gradient update rule, so the structure of the information manifold that is lost during decoupling can be leveraged to speed up learning. Empirically, our algorithm demonstrates significantly faster convergence in multiple experiments.
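To make the orthogonally decoupled parametrization concrete, below is a minimal sketch (not the authors' implementation) of the posterior mean it describes. It assumes an RBF kernel, a coupled inducing set Z_beta shared by mean and covariance, and an extra mean-only set Z_gamma; all names and the kernel choice are illustrative.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    """RBF kernel matrix k(A, B) for inputs of shape (n, d) and (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def orthogonal_mean(X, Z_beta, Z_gamma, a_beta, a_gamma, jitter=1e-6):
    """Posterior mean m(X) = K_xb a_beta + (K_xg - K_xb Kbb^{-1} K_bg) a_gamma.

    The gamma basis is projected onto the orthogonal complement of the
    beta basis, so a_gamma only models the residue that the coupled
    (beta) basis cannot capture.
    """
    K_xb = rbf(X, Z_beta)
    K_xg = rbf(X, Z_gamma)
    K_bb = rbf(Z_beta, Z_beta) + jitter * np.eye(len(Z_beta))
    K_bg = rbf(Z_beta, Z_gamma)
    # Orthogonalized gamma features: residue after projecting onto span(beta).
    K_xg_perp = K_xg - K_xb @ np.linalg.solve(K_bb, K_bg)
    return K_xb @ a_beta + K_xg_perp @ a_gamma
```

Because the gamma component lies in the orthogonal complement of the beta basis, setting a_gamma = 0 recovers the standard coupled mean, which is why this parametrization extends rather than replaces the coupled approach.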