Large Language Models have emerged as a promising approach for graph learning due to their powerful reasoning capabilities. However, existing methods exhibit systematic performance degradation on structurally important nodes such as bridges and hubs. We identify the root cause of these limitations: current approaches encode graph topology into static features but lack the reasoning scaffolds needed to transform topological patterns into role-based interpretations. This limitation becomes critical in zero-shot scenarios, where no training data establishes structure-semantics mappings. To address this gap, we propose DuoGLM, a training-free dual-perspective framework for structure-aware graph reasoning. The local perspective constructs relation-aware templates that capture semantic interactions between nodes and their neighbors. The global perspective performs topology-to-role inference to generate functional descriptions of structural positions. Together, these complementary perspectives provide explicit reasoning mechanisms that enable LLMs to distinguish topologically similar but semantically different nodes. Extensive experiments across eight benchmark datasets demonstrate substantial improvements: DuoGLM achieves a 14.3\% accuracy gain in zero-shot node classification and a 7.6\% AUC improvement in cross-domain transfer over existing methods. These results validate the effectiveness of explicit role reasoning for graph understanding with LLMs.