Graph classification is a critical research problem in many applications across different domains. To learn a graph classification model, the most widely used supervision component is an output layer together with a classification loss (e.g., cross-entropy loss with softmax, or margin loss). In fact, the discriminative information among instances is more fine-grained and can further benefit graph classification tasks. In this paper, we propose the novel Label Contrastive Coding based Graph Neural Network (LCGNN) to utilize label information more effectively and comprehensively. LCGNN still uses the classification loss to ensure the discriminability of classes. Meanwhile, LCGNN leverages the proposed Label Contrastive Loss, derived from self-supervised learning, to encourage instance-level intra-class compactness and inter-class separability. To power the contrastive learning, LCGNN introduces a dynamic label memory bank and a momentum-updated encoder. Our extensive evaluations on eight benchmark graph datasets demonstrate that LCGNN can outperform state-of-the-art graph classification models. Experimental results also verify that LCGNN can achieve competitive performance with less training data because it exploits label information comprehensively.
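To make the contrastive component concrete, below is a minimal PyTorch sketch of a label-aware contrastive loss paired with a momentum-updated key encoder, in the MoCo style the abstract's memory bank and momentum encoder suggest. The function names, the temperature `tau=0.07`, and the momentum `m=0.999` are illustrative assumptions, not values taken from the paper; the exact form of the Label Contrastive Loss is defined in the paper itself.

```python
import torch
import torch.nn.functional as F

def label_contrastive_loss(query, bank_keys, query_labels, bank_labels, tau=0.07):
    """Sketch of a label contrastive loss: memory-bank keys sharing a query's
    label act as positives; all other keys act as negatives (assumed form)."""
    # Cosine-similarity logits between queries and bank keys, scaled by tau.
    query = F.normalize(query, dim=1)           # (B, d) query embeddings
    bank_keys = F.normalize(bank_keys, dim=1)   # (K, d) memory-bank embeddings
    logits = query @ bank_keys.t() / tau        # (B, K)

    # Positive mask: 1 where the bank key carries the same label as the query.
    pos_mask = (query_labels.unsqueeze(1) == bank_labels.unsqueeze(0)).float()

    # Log-probability of each key under a softmax over all bank keys,
    # averaged over each query's positive keys.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()

@torch.no_grad()
def momentum_update(key_encoder, query_encoder, m=0.999):
    """Momentum update: the key encoder slowly tracks the query encoder,
    keeping memory-bank embeddings consistent across training steps."""
    for pk, pq in zip(key_encoder.parameters(), query_encoder.parameters()):
        pk.mul_(m).add_(pq, alpha=1 - m)
```

In this reading, the total training objective would combine the standard classification loss with the label contrastive term, so the model retains class discriminability while pulling same-label graph embeddings together and pushing different-label embeddings apart.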