Approximate inference in probabilistic graphical models (PGMs) can be grouped into deterministic methods and Monte-Carlo-based methods. The former can often provide accurate and rapid inferences, but are typically associated with biases that are hard to quantify. The latter enjoy asymptotic consistency, but can suffer from high computational costs. In this paper we present a way of bridging the gap between deterministic and stochastic inference. Specifically, we suggest an efficient sequential Monte Carlo (SMC) algorithm for PGMs which can leverage the output from deterministic inference methods. While generally applicable, we show explicitly how this can be done with loopy belief propagation, expectation propagation, and Laplace approximations. The resulting algorithm can be viewed as a post-correction of the biases associated with these methods and, indeed, numerical results show clear improvements over the baseline deterministic methods as well as over "plain" SMC.