Recent advances in large language models (LLMs) have enabled strong reasoning over both structured and unstructured knowledge. When grounded in knowledge graphs (KGs), however, prevailing pipelines rely either on heavy neural encoders to embed and score symbolic paths or on repeated LLM calls to rank candidates, leading to high latency, high GPU cost, and opaque decisions that hinder faithful, scalable deployment. We propose PathHD, a lightweight, encoder-free KG reasoning framework that replaces neural path scoring with hyperdimensional computing (HDC) and uses only a single LLM call per query. PathHD encodes relation paths into block-diagonal GHRR hypervectors, ranks candidates with blockwise cosine similarity and Top-K pruning, and then performs a one-shot LLM adjudication to produce the final answer together with cited supporting paths. Technically, PathHD rests on three ingredients: (i) an order-aware, non-commutative binding operator for path composition, (ii) a calibrated similarity for robust hypervector-based retrieval, and (iii) a one-shot adjudication step that preserves interpretability while eliminating per-path LLM scoring. On WebQSP, CWQ, and the GrailQA split, PathHD (i) attains comparable or better Hits@1 than strong neural baselines while using one LLM call per query; (ii) reduces end-to-end latency by $40$–$60\%$ and GPU memory by $3$–$5\times$ thanks to encoder-free retrieval; and (iii) delivers faithful, path-grounded rationales that improve error diagnosis and controllability. These results indicate that carefully designed HDC representations provide a practical substrate for efficient KG-LLM reasoning, offering a favorable accuracy-efficiency-interpretability trade-off.
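To make the abstract's core mechanism concrete, the sketch below illustrates GHRR-style block-diagonal hypervectors with an order-aware, non-commutative binding operator (blockwise matrix products over random orthogonal blocks) and a blockwise cosine similarity. This is a minimal illustration under our own assumptions, not PathHD's actual implementation; the block count, block size, and function names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_BLOCKS, BLOCK_DIM = 64, 4  # hypervector = 64 diagonal blocks of 4x4 matrices

def random_relation_hv():
    """Random block-diagonal hypervector: one random orthogonal block per slot.
    Orthogonal blocks make binding norm-preserving (a GHRR-style choice)."""
    blocks = []
    for _ in range(NUM_BLOCKS):
        # QR decomposition of a Gaussian matrix yields a random orthogonal block
        q, _ = np.linalg.qr(rng.standard_normal((BLOCK_DIM, BLOCK_DIM)))
        blocks.append(q)
    return np.stack(blocks)  # shape (NUM_BLOCKS, BLOCK_DIM, BLOCK_DIM)

def bind(a, b):
    """Order-aware path composition: blockwise matrix product (non-commutative)."""
    return a @ b  # batched matmul over the block axis

def similarity(a, b):
    """Blockwise cosine similarity, averaged over blocks."""
    num = np.sum(a * b, axis=(1, 2))
    den = np.linalg.norm(a, axis=(1, 2)) * np.linalg.norm(b, axis=(1, 2))
    return float(np.mean(num / den))

r1, r2 = random_relation_hv(), random_relation_hv()
path_fwd = bind(r1, r2)  # encodes the path r1 -> r2
path_rev = bind(r2, r1)  # encodes the path r2 -> r1

# A path matches itself exactly, while the two relation orders yield
# dissimilar hypervectors -- the binding is order-aware.
print(similarity(path_fwd, path_fwd))  # ~1.0 (self-match)
print(similarity(path_fwd, path_rev))  # near 0 (order matters)
```

Because binding is a blockwise matrix product, `bind(r1, r2)` and `bind(r2, r1)` are distinct hypervectors, which is what lets the retrieval stage distinguish relation paths that traverse the same edges in different orders.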