Graph Prompt Learning (GPL) has emerged as a promising paradigm that bridges pretrained graph models and downstream scenarios, mitigating label dependency and the misalignment between upstream pretraining and downstream tasks. Although existing GPL studies explore various prompt strategies, their effectiveness and underlying principles remain unclear. We identify two critical limitations: (1) Lack of consensus on underlying mechanisms: although current GPL methods have advanced the field, there is no consensus on how prompts interact with pretrained models, as different strategies intervene at different points in the model, i.e., input-level, layer-wise, and representation-level prompts. (2) Limited scenario adaptability: most methods fail to generalize across diverse downstream scenarios, especially under data distribution shifts (e.g., from homophilic to heterophilic graphs). To address these issues, we theoretically analyze existing GPL approaches and reveal that representation-level prompts essentially amount to fine-tuning a simple downstream classifier. This finding suggests that graph prompt learning should focus on unleashing the capability of the pretrained model, while the classifier should adapt to the downstream scenario. Based on these findings, we propose UniPrompt, a novel GPL method that adapts to any pretrained model, unleashing its capability while preserving the input graph. Extensive experiments demonstrate that our method integrates effectively with various pretrained models and achieves strong performance in both in-domain and cross-domain scenarios.
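To make the classifier-equivalence claim concrete, consider a minimal worked sketch; the symbols $h$, $p$, $W$, and $b$ are illustrative and assume a shared additive representation-level prompt followed by a linear classifier, not the paper's exact formulation. Let $h = f_\theta(G)$ be the frozen pretrained representation, $p$ a learnable prompt vector added to it, and $(W, b)$ the classifier parameters. Then

\[
\hat{y} \;=\; \operatorname{softmax}\big(W(h + p) + b\big) \;=\; \operatorname{softmax}\big(Wh + (Wp + b)\big),
\]

so training the prompt $p$ jointly with the classifier is equivalent to fine-tuning the classifier with a reparameterized bias $b' = Wp + b$, while the pretrained model's computation on the input graph is left untouched.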