VIP Content

Attention in graph neural networks aims to assign larger weights to important neighbor nodes so as to learn better representations. However, how attention is actually learned over graphs is not well understood, especially when the graph is noisy. In this paper, the authors propose the self-supervised graph attention network (SuperGAT), an improved graph attention model designed for noisy graphs. The key to self-supervision is finding a supervisory signal that already exists in the data; predicting the presence and absence of edges is one such widely used signal, since it encodes the relationships between nodes. The paper therefore exploits two forms of attention that are compatible with this self-supervised task of predicting edge presence and absence. By encoding edges, the proposed SuperGAT learns more expressive attention for distinguishing neighbors connected by incorrect links. The authors further find that two basic graph properties influence the effectiveness of the attention forms and of the self-supervision: homophily and average degree. These two properties in turn provide guidance on which attention design to use. Experiments on 17 real-world datasets show that the method generalizes to 15 of them, yielding higher performance than the baselines.
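
As a minimal sketch of this idea (assuming PyTorch; not the authors' implementation), the layer below uses single-head dot-product attention whose logits double as edge-probability logits, and adds a link-prediction loss over observed edges and uniformly sampled node pairs as negatives. The class name `SelfSupervisedGATLayer`, the single-head simplification, and the uniform negative sampling are illustrative assumptions; the paper also studies an additive/mixed attention form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfSupervisedGATLayer(nn.Module):
    """Hypothetical single-head GAT-style layer with self-supervised edge prediction."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def logits(self, h, src, dst):
        # Scaled dot-product attention logit e_ij = <W h_i, W h_j> / sqrt(d);
        # the same logit is reused as a link-prediction score.
        z = self.lin(h)
        return (z[src] * z[dst]).sum(-1) / z.size(-1) ** 0.5

    def forward(self, h, edge_index):
        src, dst = edge_index                 # edge_index: (2, E), edges src -> dst
        e = self.logits(h, src, dst)
        # Softmax of e over each destination node's incoming edges.
        alpha = torch.exp(e - e.max())
        denom = torch.zeros(h.size(0)).index_add_(0, dst, alpha)
        alpha = alpha / denom[dst].clamp(min=1e-9)
        out = torch.zeros(h.size(0), self.lin.out_features)
        return out.index_add_(0, dst, alpha.unsqueeze(-1) * self.lin(h)[src])

    def edge_loss(self, h, edge_index):
        # Self-supervision: sigmoid(e_ij) should be 1 on present edges and
        # 0 on sampled absent pairs, so the attention logits encode links.
        src, dst = edge_index
        n, m = h.size(0), src.size(0)
        neg_src = torch.randint(0, n, (m,))
        neg_dst = torch.randint(0, n, (m,))
        scores = torch.cat([self.logits(h, src, dst),
                            self.logits(h, neg_src, neg_dst)])
        labels = torch.cat([torch.ones(m), torch.zeros(m)])
        return F.binary_cross_entropy_with_logits(scores, labels)
```

During training, `edge_loss` would be added to the node-classification loss with a small weighting coefficient, so the same attention logits are shaped by both the task labels and the graph structure.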

Latest Content

Group equivariant neural networks are used as building blocks of group invariant neural networks, which have been shown to improve generalisation performance and data efficiency through principled parameter sharing. Such works have mostly focused on group equivariant convolutions, building on the result that group equivariant linear maps are necessarily convolutions. In this work, we extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models. We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. We demonstrate the generality of our approach by showing experimental results that are competitive with baseline methods on a wide range of tasks: shape counting on point clouds, molecular property regression, and modelling particle trajectories under Hamiltonian dynamics.
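
The LieSelfAttention construction lifts inputs to group elements; as a much simpler toy illustration of the underlying principle (attention built only from group-invariant quantities yields an equivariant or invariant layer), the hypothetical PyTorch sketch below, not the paper's method, computes attention logits from pairwise distances, which are unchanged under 3D rotations, and checks the resulting invariance numerically for one rotation.

```python
import math
import torch
import torch.nn as nn

class InvariantAttention(nn.Module):
    """Toy attention over a point cloud; logits depend only on distances."""

    def __init__(self, dim):
        super().__init__()
        self.value = nn.Linear(dim, dim)
        self.kernel = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x, feats):
        # x: (N, 3) coordinates; feats: (N, dim) scalar features.
        d = torch.cdist(x, x).unsqueeze(-1)          # rotation-invariant distances
        attn = self.kernel(d).squeeze(-1).softmax(dim=-1)
        return attn @ self.value(feats)

torch.manual_seed(0)
layer = InvariantAttention(8)
x, f = torch.randn(5, 3), torch.randn(5, 8)
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)  # 90-degree rotation about z
R = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(torch.allclose(layer(x, f), layer(x @ R.T, f), atol=1e-5))  # True
```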
