The main quantities of interest in Bayesian inference are arguably the first two moments of the posterior distribution. In recent decades, variational inference (VI) has emerged as a tractable approach to approximating these summary statistics, and a viable alternative to the more established paradigm of Markov chain Monte Carlo. However, little is known about the approximation accuracy of VI. In this work, we bound the mean and covariance approximation error of Gaussian VI in terms of dimension and sample size. Our results indicate that Gaussian VI significantly outperforms the classical Gaussian approximation obtained from the ubiquitous Laplace method. Our error analysis relies on a Hermite series expansion of the log posterior whose first terms are precisely cancelled out by the first-order optimality conditions associated with the Gaussian VI optimization problem.
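For reference, a minimal sketch of the two Gaussian approximations being compared; the notation below (posterior $\pi$ on $\mathbb{R}^d$) is not fixed by the abstract and is introduced here only for illustration. Gaussian VI selects the mean and covariance by minimizing a Kullback--Leibler divergence over the Gaussian family, whereas the Laplace method uses the posterior mode and the inverse Hessian of the negative log posterior at that mode:
\[
(\hat m_{\mathrm{VI}}, \hat S_{\mathrm{VI}}) \;\in\; \operatorname*{argmin}_{m\in\mathbb{R}^d,\; S\succ 0} \ \mathrm{KL}\!\left(\mathcal{N}(m,S)\,\middle\|\,\pi\right),
\qquad
\hat m_{\mathrm{LAP}} \;=\; \operatorname*{argmax}_{\theta\in\mathbb{R}^d}\ \log\pi(\theta),
\quad
\hat S_{\mathrm{LAP}} \;=\; \bigl(-\nabla^2\log\pi(\hat m_{\mathrm{LAP}})\bigr)^{-1}.
\]
The first-order optimality conditions invoked in the abstract are the stationarity equations of this KL objective in $(m, S)$.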