Standard likelihood penalties for learning Gaussian graphical models are based on regularising the off-diagonal entries of the precision matrix. Such methods, and their Bayesian counterparts, are not invariant to scalar multiplication of the variables, unless one standardises the observed data to unit sample variances. We show that such standardisation can have a strong effect on inference and introduce a new family of penalties based on partial correlations. We show that the latter, as well as the maximum likelihood, $L_0$ and logarithmic penalties, are scale invariant. We illustrate the use of one such penalty, the partial correlation graphical LASSO, which places an $L_{1}$ penalty on partial correlations. The associated optimisation problem is no longer convex, but it is conditionally convex. We show via simulated examples and two real datasets that, besides being scale invariant, the method can yield important gains in inference.
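The scale-invariance claim above can be checked numerically. The sketch below (an illustration, not the paper's implementation) rescales the variables by a diagonal matrix $D$, so that the covariance transforms as $\Sigma \mapsto D\Sigma D$ and the precision as $\Omega \mapsto D^{-1}\Omega D^{-1}$: the off-diagonal precision entries change, while the partial correlations $\rho_{ij} = -\Omega_{ij}/\sqrt{\Omega_{ii}\Omega_{jj}}$ do not.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric positive-definite covariance matrix
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 4 * np.eye(4)
Omega = np.linalg.inv(Sigma)  # precision matrix

def partial_correlations(Omega):
    # rho_ij = -Omega_ij / sqrt(Omega_ii * Omega_jj)
    d = np.sqrt(np.diag(Omega))
    return -Omega / np.outer(d, d)

# Rescale the variables: X -> D X, hence Sigma -> D Sigma D
D = np.diag([1.0, 10.0, 0.1, 5.0])
Sigma_scaled = D @ Sigma @ D
Omega_scaled = np.linalg.inv(Sigma_scaled)

rho = partial_correlations(Omega)
rho_scaled = partial_correlations(Omega_scaled)

# Partial correlations are unchanged by rescaling...
print(np.allclose(rho, rho_scaled))            # True
# ...but the off-diagonal precision entries are not
print(np.isclose(Omega[0, 1], Omega_scaled[0, 1]))  # False
```

This is why penalising $|\Omega_{ij}|$ directly makes the selected graph depend on the measurement units of the data, whereas a penalty on $|\rho_{ij}|$ does not.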