We present SCR, a framework for enhancing the training of graph neural networks (GNNs) with consistency regularization. Regularization is a set of strategies used in machine learning to reduce overfitting and improve generalization. However, it remains unclear how to best design regularization strategies for GNNs, which operate in a semi-supervised setting on graph data. The main challenge lies in efficiently balancing the trade-off between the error from the labeled data and that from the unlabeled data. SCR is a simple yet general framework that introduces two consistency-regularization strategies to address this challenge. The first minimizes the disagreement among perturbed predictions produced by different versions of a GNN model. The second leverages the Mean Teacher paradigm, estimating a consistency loss between teacher and student models rather than the disagreement among predictions. We conducted experiments on three large-scale node classification datasets from the Open Graph Benchmark (OGB). The results demonstrate that SCR is a general framework that can enhance various GNNs to achieve better performance. As of this submission, SCR ranks first on all three OGB leaderboards.
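To make the two strategies concrete, below is a minimal PyTorch sketch (not the authors' code) of how each consistency term could be computed. The model handle `gnn`, the weighting `lambda_u`, and the decay value are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of the two consistency-regularization strategies described
# in the abstract, assuming a generic stochastic PyTorch GNN.
import torch
import torch.nn.functional as F


def disagreement_loss(logits_list):
    """Strategy 1: penalize disagreement among perturbed predictions.

    Each element of `logits_list` is the output of a different stochastic
    version of the same GNN (e.g., different dropout masks on one forward
    pass each).
    """
    probs = [F.softmax(logits, dim=-1) for logits in logits_list]
    mean_prob = torch.stack(probs).mean(dim=0)
    # Mean squared deviation of each perturbed view from the averaged
    # prediction; zero when all views agree.
    return sum(F.mse_loss(p, mean_prob) for p in probs) / len(probs)


@torch.no_grad()
def mean_teacher_update(teacher, student, decay=0.99):
    """Strategy 2 (Mean Teacher): the teacher's weights track an
    exponential moving average (EMA) of the student's weights."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1 - decay)


# Usage sketch (names hypothetical):
#   logits_a = gnn(x, edge_index)   # dropout active -> perturbed view 1
#   logits_b = gnn(x, edge_index)   # dropout active -> perturbed view 2
#   loss = supervised_loss + lambda_u * disagreement_loss([logits_a, logits_b])
# For Mean Teacher, the consistency loss instead compares student and
# teacher outputs on unlabeled nodes, e.g.:
#   consistency = F.mse_loss(F.softmax(student_logits, dim=-1),
#                            F.softmax(teacher_logits, dim=-1))
# followed by mean_teacher_update(teacher, student) after each step.
```

One intuition behind the second strategy: the teacher's EMA weights change slowly, so its predictions on unlabeled nodes provide a more stable consistency target than any single perturbed forward pass.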