Graph attention networks (GATs) are powerful tools for analyzing graph data from various real-world scenarios. To learn representations for downstream tasks, GATs generally attend to all neighbors of the central node when aggregating features. In this paper, we show that in many real-world graphs a large portion of neighbors are irrelevant to their central nodes and can be excluded from neighbor aggregation. Motivated by this observation, we present Selective Attention (SA), a series of novel attention mechanisms for graph neural networks (GNNs). SA leverages diverse forms of learnable node-node dissimilarity to determine the scope of attention for each node, from which irrelevant neighbors are excluded. We further propose Graph selective attention networks (SATs), which learn representations from the highly correlated node features identified by the different SA mechanisms. Finally, we present a theoretical analysis of the expressive power of the proposed SATs and a comprehensive empirical study on challenging real-world datasets against state-of-the-art GNNs, demonstrating the effectiveness of SATs.
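The core idea above can be sketched numerically. The following is a minimal, hypothetical illustration (not the authors' implementation): GAT-style attention logits are computed over a node's neighborhood, a learnable node-node dissimilarity restricts each node's scope of attention, and neighbors outside that scope are masked out before the softmax. All names (`W`, `a`, `delta`, `tau`) and the diagonal weighted-distance form of the dissimilarity are assumptions for illustration only.

```python
import numpy as np

def selective_attention_aggregate(h, adj, W, a, delta, tau):
    """Illustrative sketch of one selective-attention aggregation step.

    h:     (N, F)  node features
    adj:   (N, N)  binary adjacency matrix (1 = edge)
    W:     (F, Fp) linear projection
    a:     (2*Fp,) GAT-style attention vector
    delta: (Fp,)   assumed learnable dissimilarity weights (diagonal form)
    tau:   scalar  threshold defining each node's scope of attention
    """
    z = h @ W                                    # projected features
    N = z.shape[0]
    # GAT-style pairwise attention logits with LeakyReLU
    logits = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            e = np.concatenate([z[i], z[j]]) @ a
            logits[i, j] = np.maximum(0.2 * e, e)
    # learnable node-node dissimilarity: weighted squared distance
    diff = z[:, None, :] - z[None, :, :]         # (N, N, Fp)
    dis = (diff ** 2) @ delta                    # (N, N)
    # scope of attention: keep only neighbors below the dissimilarity threshold
    scope = (adj > 0) & (dis < tau)
    scope |= np.eye(N, dtype=bool)               # always keep the node itself
    logits = np.where(scope, logits, -np.inf)    # exclude irrelevant neighbors
    # softmax over the remaining (relevant) neighbors
    att = np.exp(logits - logits.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)
    return att @ z                               # aggregated representations
```

Setting `tau` to infinity recovers ordinary GAT-style aggregation over all neighbors; a finite `tau` prunes dissimilar neighbors from the aggregation.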