Markov Logic Networks (MLNs) define a probability distribution over relational structures for varying domain sizes. Many works have observed that MLNs, like many other relational models, do not admit consistent marginal inference across varying domain sizes. Furthermore, an MLN learned on a given domain does not generalize to new domains of different sizes. Recent works have established connections between domain-size dependence, lifted inference, and learning from sub-sampled domains. The central notion in these works is projectivity. Under a projective model, the marginal probabilities of sub-structures are independent of the domain cardinality. Hence, projective models admit efficient marginal inference, removing any dependence on the domain size. Furthermore, projective models potentially allow efficient and consistent parameter learning from sub-sampled domains. In this paper, we characterize the necessary and sufficient conditions for a two-variable MLN to be projective. We then isolate a special model in this class of MLNs, namely the Relational Block Model (RBM). We show that, in terms of data-likelihood maximization, the RBM is the best possible projective MLN in the two-variable fragment. Finally, we show that RBMs also admit consistent parameter learning over sub-sampled domains.
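For reference, the projectivity condition can be stated formally as follows. This is a minimal sketch following the standard definition from the statistical relational learning literature; the notation ($P^{(n)}$, $\omega$) is illustrative and not necessarily the paper's own. Let $P^{(n)}$ denote the distribution the MLN induces over possible worlds $\omega$ on the domain $\{1,\dots,n\}$. The family $\{P^{(n)}\}_{n \ge 1}$ is projective if, for all $m \le n$ and every world $\omega'$ on $\{1,\dots,m\}$,
\[
  P^{(m)}(\omega') \;=\; \sum_{\omega \,:\; \omega\downarrow\{1,\dots,m\} \,=\, \omega'} P^{(n)}(\omega),
\]
where $\omega\downarrow\{1,\dots,m\}$ denotes the restriction of $\omega$ to the first $m$ domain elements. Under this condition, the marginal probability of any query over a fixed set of domain elements is the same for every domain size $n \ge m$, which is exactly the domain-size independence described above.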