Graph Convolutional Networks (GCNs) were proposed by Thomas Kipf and Max Welling in the 2017 paper Semi-supervised classification with graph convolutional networks. They offer a new approach to processing graph-structured data, carrying the convolutional neural networks commonly used on images in deep learning over to graph data.
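The core of a GCN layer is the propagation rule H' = σ(D̂^{-1/2}(A + I)D̂^{-1/2} H W). Below is a minimal PyTorch sketch of that rule; the class and variable names are illustrative and not taken from Kipf and Welling's reference implementation.

```python
# A minimal GCN layer in plain PyTorch implementing
# H' = relu(D^{-1/2} (A + I) D^{-1/2} H W); names are illustrative.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj, x):
        # Add self-loops, then symmetrically normalize the adjacency matrix.
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm_adj = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        # Aggregate neighbour features and apply the learnable transform.
        return torch.relu(norm_adj @ self.weight(x))

# Toy usage: a 4-node path graph with 3 input features per node.
adj = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
x = torch.randn(4, 3)
print(GCNLayer(3, 2)(adj, x).shape)  # torch.Size([4, 2])
```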

In this paper we cast neural networks defined on graphs as message-passing neural networks (MPNNs) in order to study the distinguishing power of different classes of such models. We are interested in whether certain architectures are able to tell vertices apart based on the feature labels given as input with the graph. We consider two variants of MPNNs: anonymous MPNNs, whose message functions depend only on the labels of the vertices involved; and degree-aware MPNNs, in which message functions can additionally use information regarding the degrees of vertices. The former class covers a popular formalism for computing functions on graphs: graph neural networks (GNNs). The latter covers the so-called graph convolutional networks (GCNs), a recently introduced variant of GNNs by Kipf and Welling. We obtain lower and upper bounds on the distinguishing power of MPNNs in terms of the distinguishing power of the Weisfeiler-Lehman (WL) algorithm. Our results imply that (i) the distinguishing power of GCNs is bounded by the WL algorithm, but they are one step ahead; (ii) the WL algorithm cannot be simulated by "plain vanilla" GCNs, but the addition of a trade-off parameter between features of the vertex and those of its neighbours (as proposed by Kipf and Welling themselves) resolves this problem.
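For context, the WL yardstick used here is 1-dimensional colour refinement: in each round a vertex's colour is replaced by a combination of its own colour and the multiset of its neighbours' colours, which is also the template an anonymous MPNN round follows. A small illustrative sketch, with a graph representation and function names of my own choosing:

```python
# One round of 1-WL colour refinement over an adjacency-list graph.
def wl_refine(colours, neighbours):
    """colours: dict vertex -> colour; neighbours: dict vertex -> list of vertices."""
    new_colours = {}
    for v in colours:
        # A vertex's new colour combines its own colour with the multiset
        # (here a sorted tuple) of its neighbours' colours.
        signature = (colours[v], tuple(sorted(colours[u] for u in neighbours[v])))
        new_colours[v] = hash(signature)
    return new_colours

# Toy usage: a triangle with a pendant vertex attached at vertex 2.
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colours = {v: 0 for v in neighbours}
print(wl_refine(colours, neighbours))  # 0 and 1 share a colour; 2 and 3 get distinct colours
```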

Aspect-level sentiment classification aims to identify the sentiment polarity towards a specific aspect term in a sentence. Most current approaches mainly consider semantic information by utilizing attention mechanisms to capture the interactions between the context and the aspect term. In this paper, we propose to employ graph convolutional networks (GCNs) on the dependency tree to learn syntax-aware representations of aspect terms. GCNs often show the best performance with two layers, and deeper GCNs bring no additional gain due to the over-smoothing problem. However, in some cases, important context words cannot be reached within two hops on the dependency tree. Therefore, we design a selective-attention-based GCN block (SA-GCN) to find the most important context words and directly aggregate this information into the aspect-term representation. We conduct experiments on the SemEval 2014 Task 4 datasets. Our experimental results show that our model outperforms the current state-of-the-art.
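A hedged sketch of the selective-attention idea: score each GCN-encoded context word against the aspect representation, keep the top-k words, and pool them into the aspect-term vector. The dot-product scoring, the value of k, and all names below are assumptions for illustration, not the paper's exact design.

```python
# Selective attention over GCN-encoded context words (illustrative names).
import torch

def selective_attention(context, aspect, k=3):
    """context: [n, d] GCN outputs for context words; aspect: [d] aspect vector."""
    scores = context @ aspect                        # [n] relevance to the aspect
    topk = torch.topk(scores, k=min(k, context.size(0)))
    weights = torch.softmax(topk.values, dim=0)      # attention over the selected words
    selected = context[topk.indices]                 # [k, d] most important words
    return aspect + weights @ selected               # aggregate into the aspect representation

context = torch.randn(8, 16)   # e.g. eight context words from a two-layer GCN
aspect = torch.randn(16)
print(selective_attention(context, aspect).shape)   # torch.Size([16])
```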

Recent reports from industry show that social recommender systems consistently fail in practice. According to these negative findings, the failure is attributed to: (1) a majority of users have only a very limited number of neighbors in social networks and can hardly benefit from relations; (2) social relations are noisy but are often used indiscriminately; (3) social relations are assumed to be universally applicable to multiple scenarios, while they are actually multi-faceted and show heterogeneous strengths in different scenarios. Most existing social recommendation models only consider the homophily in social networks and neglect these drawbacks. In this paper we propose a deep adversarial framework based on graph convolutional networks (GCN) to address these problems. Concretely, for the relation sparsity and noise problems, a GCN-based autoencoder is developed to augment the relation data by encoding high-order and complex connectivity patterns, and is meanwhile optimized subject to the constraint of reconstructing the original social profile to guarantee the validity of the newly identified neighborhoods. After obtaining enough purified social relations for each user, a GCN-based attentive social recommendation module is designed to capture the heterogeneous strengths of social relations. These designs respectively address the three problems faced by social recommender systems. Finally, we adopt adversarial training to unify and intensify all components by playing a minimax game, ensuring a coordinated effort to enhance social recommendation. Experimental results on multiple open datasets demonstrate the superiority of our framework, and the ablation study confirms the importance and effectiveness of each component.
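A minimal sketch of the relation-augmentation component as described: a two-layer GCN encoder over the social graph with an inner-product decoder that reconstructs the social profile. Class names, layer sizes, and the reconstruction loss are assumptions; the paper's adversarial training and attentive recommendation module are not reproduced here.

```python
# A two-layer GCN encoder over the social graph with an inner-product decoder
# that reconstructs the social profile (illustrative names and sizes).
import torch
import torch.nn as nn

class SocialGCNAutoencoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, hid_dim, bias=False)

    def encode(self, norm_adj, x):
        h = torch.relu(norm_adj @ self.w1(x))        # first propagation over social ties
        return norm_adj @ self.w2(h)                 # second propagation: high-order patterns

    def forward(self, norm_adj, x):
        z = self.encode(norm_adj, x)
        # Inner-product decoder: score of a (possibly new) social relation.
        return torch.sigmoid(z @ z.t())

# Toy usage with a stand-in normalized 5-user social adjacency matrix.
norm_adj = torch.eye(5) * 0.5 + 0.1
x = torch.randn(5, 8)
recon = SocialGCNAutoencoder(8, 4)(norm_adj, x)      # [5, 5] reconstructed relations
loss = nn.functional.binary_cross_entropy(recon, (norm_adj > 0.3).float())
```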

Graph convolution operators bring the advantages of deep learning to a variety of graph and mesh processing tasks previously deemed out of reach. With their continued success comes the desire to design more powerful architectures, often by adapting existing deep learning techniques to non-Euclidean data. In this paper, we argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning. We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs). We conjecture that, like RBFs, graph convolution layers would benefit from the addition of simple functions to the powerful convolution kernels. We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator. We experimentally demonstrate the effectiveness of our technique and show the improved performance is the consequence of more than the increased number of parameters. Operators equipped with the affine skip connection markedly outperform their base performance on every task we evaluated, i.e., shape reconstruction, dense shape correspondence, and graph classification. We hope our simple and effective approach will serve as a solid baseline and help ease future research in graph neural networks.
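A minimal sketch of an affine skip connection as described above: the output of any graph convolution operator is summed with a plain fully connected (affine) layer applied to the block's input. The wrapper and the stand-in operator below are illustrative, not the paper's code.

```python
# An affine skip connection: graph convolution output plus a fully connected
# (affine) layer applied to the block's input (illustrative names).
import torch
import torch.nn as nn

class AffineSkip(nn.Module):
    def __init__(self, graph_conv, in_dim, out_dim):
        super().__init__()
        self.graph_conv = graph_conv                  # any operator taking (adj, x)
        self.affine = nn.Linear(in_dim, out_dim)      # the affine skip branch

    def forward(self, adj, x):
        return self.graph_conv(adj, x) + self.affine(x)

# Toy usage wrapping a simple one-weight graph convolution.
inner = nn.Linear(16, 32, bias=False)
conv = lambda adj, x: adj @ inner(x)
block = AffineSkip(conv, 16, 32)
print(block(torch.eye(6), torch.randn(6, 16)).shape)  # torch.Size([6, 32])
```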

This paper presents a novel, automated, generative adversarial network (GAN) based synthetic feeder generation mechanism, abbreviated as FeederGAN. FeederGAN digests real feeder models represented by directed graphs via a deep learning framework powered by a GAN and graph convolutional networks (GCN). From power system feeder model input files, device connectivity is mapped to the adjacency matrix, while device characteristics such as circuit types (i.e., 3-phase, 2-phase, and 1-phase) and component attributes (e.g., length and current ratings) are mapped to the attribute matrix. Then, the Wasserstein distance is used to optimize the GAN, and a GCN is used to discriminate the generated graph from the actual one. A greedy method based on graph theory is developed to reconstruct the feeder from the generated adjacency and attribute matrices. Our results show that the generated feeders resemble the actual feeders in both topology and attributes, as verified by visual inspection and by empirical statistics obtained from actual distribution feeders.
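A hedged sketch of the discriminator side: a small GCN critic that reads the adjacency and attribute matrices of a (real or generated) feeder graph and outputs an unbounded score suitable for a Wasserstein objective. Layer sizes, normalization, and the mean-pooling readout are assumptions, not FeederGAN's exact design.

```python
# A GCN-based Wasserstein critic over a feeder graph given as an adjacency
# matrix and a device-attribute matrix (illustrative sizes and readout).
import torch
import torch.nn as nn

class GCNCritic(nn.Module):
    def __init__(self, attr_dim, hid_dim=32):
        super().__init__()
        self.w1 = nn.Linear(attr_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, hid_dim, bias=False)
        self.score = nn.Linear(hid_dim, 1)            # unbounded Wasserstein score

    def forward(self, adj, attrs):
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        deg_inv = a_hat.sum(dim=1, keepdim=True).clamp(min=1e-6).reciprocal()
        h = torch.relu((deg_inv * a_hat) @ self.w1(attrs))
        h = torch.relu((deg_inv * a_hat) @ self.w2(h))
        return self.score(h.mean(dim=0))              # graph-level readout

# Toy usage: 10 devices, 4 attributes each (e.g. phases, length, current rating).
adj = (torch.rand(10, 10) > 0.7).float()
attrs = torch.rand(10, 4)
print(GCNCritic(4)(adj, attrs))                       # a single real-valued score
```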

We present GraphMix, a regularization technique for Graph Neural Network based semi-supervised object classification, leveraging recent advances in the regularization of classical deep neural networks. Specifically, we propose a unified approach in which we train a fully-connected network jointly with the graph neural network via parameter sharing, interpolation-based regularization, and self-predicted targets. Our proposed method is architecture agnostic in the sense that it can be applied to any variant of graph neural networks which applies a parametric transformation to the features of the graph nodes. Despite its simplicity, with GraphMix we can consistently improve results and achieve or closely match state-of-the-art performance using even simpler architectures such as Graph Convolutional Networks, across three established graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as three newly proposed datasets: Cora-Full, Co-author-CS and Co-author-Physics.
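A minimal sketch of the interpolation-based (mixup-style) regularization applied to the fully-connected branch: pairs of node features and their targets are linearly mixed, and the shared weights are trained on the mixture. The Beta sampling of the mixing coefficient and the loss pairing below are assumptions for illustration, not GraphMix's exact training loop.

```python
# Mixup-style interpolation of node features and targets for the
# fully-connected branch that shares weights with the GCN (illustrative).
import torch
import torch.nn as nn

def mixup_nodes(features, targets, alpha=1.0):
    """features: [n, d] node features; targets: [n, c] one-hot or self-predicted targets."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(features.size(0))
    mixed_x = lam * features + (1 - lam) * features[perm]
    mixed_y = lam * targets + (1 - lam) * targets[perm]
    return mixed_x, mixed_y

# The fully-connected branch reuses (shares) a GCN weight but ignores the graph.
shared = nn.Linear(16, 7)                    # stand-in for a shared GCN weight matrix
x = torch.randn(32, 16)
y = torch.eye(7)[torch.randint(0, 7, (32,))]
mx, my = mixup_nodes(x, y)
# Soft cross-entropy against the mixed targets.
loss = -(my * torch.log_softmax(shared(mx), dim=1)).sum(dim=1).mean()
```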

Face clustering is an essential tool for exploiting unlabeled face data, and has a wide range of applications including face annotation and retrieval. Recent works show that supervised clustering can result in noticeable performance gains. However, they usually involve heuristic steps and require numerous overlapped subgraphs, severely restricting their accuracy and efficiency. In this paper, we propose a fully learnable clustering framework that does not require a large number of overlapped subgraphs. Instead, we transform the clustering problem into two sub-problems. Specifically, two graph convolutional networks, named GCN-V and GCN-E, are designed to estimate the confidence of vertices and the connectivity of edges, respectively. With the vertex confidence and edge connectivity, we can naturally organize more relevant vertices on the affinity graph and group them into clusters. Experiments on two large-scale benchmarks show that our method significantly improves clustering accuracy and thus the performance of recognition models trained on top, yet it is an order of magnitude more efficient than existing supervised methods.
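A hedged sketch of how predicted vertex confidence and edge connectivity can be decoded into clusters: each vertex joins the neighbour with higher confidence to which it is most strongly connected (above a threshold), and the resulting forest defines the clusters. The threshold and tie handling are assumptions rather than the paper's exact procedure.

```python
# Decode clusters from predicted vertex confidence and edge connectivity:
# each vertex joins its most connected, more confident neighbour.
def cluster(neighbours, confidence, connectivity, tau=0.5):
    """neighbours: dict v -> list of u; confidence: dict v -> float;
    connectivity: dict (v, u) -> float; returns dict v -> cluster root."""
    parent = {}
    for v, nbrs in neighbours.items():
        candidates = [u for u in nbrs
                      if confidence[u] > confidence[v] and connectivity[(v, u)] >= tau]
        parent[v] = max(candidates, key=lambda u: connectivity[(v, u)]) if candidates else v

    def root(v):
        # Confidence strictly increases along parent links, so this terminates.
        while parent[v] != v:
            v = parent[v]
        return v

    return {v: root(v) for v in parent}

# Toy usage: vertices 0, 1, 2 cluster around the most confident vertex 2;
# vertices 3 and 4 form a separate cluster rooted at 3.
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4], 4: [3]}
confidence = {0: 0.2, 1: 0.4, 2: 0.9, 3: 0.6, 4: 0.3}
connectivity = {(v, u): 0.8 for v in neighbours for u in neighbours[v]}
print(cluster(neighbours, confidence, connectivity))  # {0: 2, 1: 2, 2: 2, 3: 3, 4: 3}
```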
