Large language models (LLMs) exhibit strong semantic understanding, yet they struggle when user instructions involve ambiguous or conceptually misaligned terms. We propose the Language Graph Model (LGM) to enhance conceptual clarity by extracting meta-relations (inheritance, alias, and composition) from natural language. The model further employs a reflection mechanism to validate these meta-relations. Leveraging a Concept Iterative Retrieval Algorithm, the extracted relations and their associated descriptions are dynamically supplied to the LLM, improving its ability to interpret concepts and generate accurate responses. Unlike conventional Retrieval-Augmented Generation (RAG) approaches that rely on extended context windows, our method enables LLMs to process texts of arbitrary length without truncation. Experiments on standard benchmarks demonstrate that the LGM consistently outperforms existing RAG baselines.
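The abstract names a Concept Iterative Retrieval Algorithm over inheritance, alias, and composition meta-relations but does not specify it. The following is a minimal sketch, assuming a breadth-first expansion over a concept graph with a hop budget; all identifiers here (Concept, iterative_concept_retrieval, max_hops) are illustrative and not taken from the paper.

```python
from collections import deque
from dataclasses import dataclass, field

# Meta-relation types named in the abstract; the graph layout is assumed.
RELATIONS = ("inheritance", "alias", "composition")

@dataclass
class Concept:
    name: str
    description: str
    # relation type -> names of related concepts
    relations: dict = field(default_factory=lambda: {r: [] for r in RELATIONS})

def iterative_concept_retrieval(graph: dict, seeds: list, max_hops: int = 2) -> list:
    """Breadth-first expansion over meta-relations, collecting the
    descriptions that would be supplied to the LLM as extra context."""
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    context = []
    while frontier:
        name, hops = frontier.popleft()
        node = graph.get(name)
        if node is None:
            continue
        context.append(f"{node.name}: {node.description}")
        if hops == max_hops:
            continue  # hop budget exhausted; do not expand further
        for rel in RELATIONS:
            for neighbor in node.relations[rel]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    frontier.append((neighbor, hops + 1))
    return context

# Toy usage: "df" is an alias of "DataFrame", which inherits from "Table".
graph = {
    "Table": Concept("Table", "A 2-D collection of rows and columns."),
    "DataFrame": Concept("DataFrame", "A labelled, columnar table structure.",
                         {"inheritance": ["Table"], "alias": ["df"], "composition": []}),
    "df": Concept("df", "Common shorthand for DataFrame.",
                  {"inheritance": [], "alias": ["DataFrame"], "composition": []}),
}
print("\n".join(iterative_concept_retrieval(graph, ["df"])))
```

Because retrieval walks relation edges hop by hop rather than packing documents into the prompt, the corpus itself can be arbitrarily long; only the collected concept descriptions reach the LLM's context.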