The core operation of current Graph Neural Networks (GNNs) is aggregation, realized via the graph Laplacian or message passing, which filters the neighborhood information of nodes. Though effective for various tasks, in this paper we show that aggregation is a potentially problematic factor underlying all GNN models when learning on certain datasets, as it forces node representations to become similar, so that nodes gradually lose their identity and become indistinguishable. Hence, we augment the aggregation operations with their dual, i.e., diversification operators, which make nodes more distinct and preserve their identity. This augmentation replaces aggregation with a two-channel filtering process that, in theory, enriches the node representations. In practice, the proposed two-channel filters can be easily patched onto existing GNN methods with diverse training strategies, including spectral and spatial (message-passing) methods. In experiments, we observe the desired characteristics of the models and a significant performance boost over the baselines on 9 node classification tasks.
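To make the two-channel idea concrete, here is a minimal NumPy sketch of one such layer, assuming (as one plausible instantiation, not the paper's exact formulation) that the aggregation channel is the symmetrically normalized adjacency with self-loops and the diversification channel is its complement `I - A_norm` (a high-pass filter); the function name, weight names, and the choice to combine the two channels by summation are all illustrative assumptions:

```python
import numpy as np

def two_channel_layer(A, X, W_low, W_high):
    """One hypothetical two-channel GNN layer.

    A: (n, n) binary adjacency matrix, X: (n, d) node features,
    W_low / W_high: (d, k) weights for the two channels.
    """
    n = A.shape[0]
    # Aggregation (low-pass) channel: normalized adjacency with self-loops.
    A_hat = A + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    smoothed = A_norm @ X @ W_low            # pulls neighbors together
    # Diversification (high-pass) channel: complement of the aggregator,
    # emphasizing how each node differs from its neighborhood.
    sharpened = (np.eye(n) - A_norm) @ X @ W_high
    # Combining by sum is an assumption; concatenation or learned
    # channel mixing are equally valid designs.
    return np.maximum(smoothed + sharpened, 0.0)  # ReLU nonlinearity
```

On a homophilous graph the low-pass channel dominates; on a heterophilous graph the high-pass channel lets connected nodes keep distinct representations instead of being averaged together.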