In this paper we cast neural networks defined on graphs as message-passing neural networks (MPNNs) in order to study the distinguishing power of different classes of such models. We are interested in whether certain architectures are able to tell vertices apart based on the feature labels given as input with the graph. We consider two variants of MPNNs: anonymous MPNNs, whose message functions depend only on the labels of the vertices involved; and degree-aware MPNNs, in which message functions can additionally use information regarding the degrees of vertices. The former class covers a popular formalism for computing functions on graphs: graph neural networks (GNNs). The latter covers the so-called graph convolutional networks (GCNs), a variant of GNNs recently introduced by Kipf and Welling. We obtain lower and upper bounds on the distinguishing power of MPNNs in terms of the distinguishing power of the Weisfeiler-Lehman (WL) algorithm. Our results imply that (i) the distinguishing power of GCNs is bounded by the WL algorithm, but that they are one step ahead of it; (ii) the WL algorithm cannot be simulated by "plain vanilla" GCNs, but the addition of a trade-off parameter between features of the vertex and those of its neighbours (as proposed by Kipf and Welling themselves) resolves this problem.
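To make the yardstick concrete, the following is a minimal sketch of the 1-dimensional WL colour refinement against which the distinguishing power of MPNNs is measured. The function and variable names are illustrative and not taken from the paper; each vertex repeatedly replaces its colour with a compressed encoding of its own colour and the multiset of its neighbours' colours.

```python
from collections import Counter

def wl_colours(adj, labels):
    """1-WL colour refinement (illustrative sketch).
    adj: vertex -> list of neighbours; labels: vertex -> initial colour.
    Returns the stable colouring as vertex -> small integer."""
    colours = dict(labels)
    while True:
        # New colour = (own colour, multiset of neighbour colours).
        new = {
            v: (colours[v],
                tuple(sorted(Counter(colours[u] for u in nbrs).items())))
            for v, nbrs in adj.items()
        }
        # Compress the composite colours into small integers.
        palette = {c: i for i, c in enumerate(sorted(set(new.values())))}
        new = {v: palette[c] for v, c in new.items()}
        # Each round only refines the partition, so an unchanged number
        # of colour classes means the colouring has stabilised.
        if len(set(new.values())) == len(set(colours.values())):
            return new
        colours = new
```

On a path on four vertices with identical input labels, the refinement separates the two endpoint vertices from the two interior ones, illustrating how degree information (available to degree-aware MPNNs, but not to anonymous ones) is implicitly recovered by WL.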