Large Language Models (LLMs) have demonstrated remarkable capabilities in modern medicine, yet their application in Traditional Chinese Medicine (TCM) remains severely limited by the absence of standardized benchmarks and the scarcity of high-quality training data. To address these challenges, we introduce TCM-Eval, the first dynamic and extensible benchmark for TCM, meticulously curated from national medical licensing examinations and validated by TCM experts. Furthermore, we construct a large-scale training corpus and propose Self-Iterative Chain-of-Thought Enhancement (SI-CoTE) to autonomously enrich question-answer pairs with validated reasoning chains through rejection sampling, establishing a virtuous cycle of data and model co-evolution. Using this enriched training data, we develop ZhiMingTang (ZMT), a state-of-the-art LLM specifically designed for TCM, whose performance significantly exceeds the passing threshold for human practitioners. To encourage future research and development, we release a public leaderboard, fostering community engagement and continuous improvement.
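The abstract describes SI-CoTE only at a high level (rejection sampling of reasoning chains that reach the gold answer, followed by retraining). A minimal sketch of such a loop, assuming a hypothetical model interface (`model_generate`, `model_finetune`), a hypothetical answer-extraction convention, and illustrative hyperparameters, might look as follows; none of these names or settings come from the paper itself.

```python
# Illustrative sketch of a self-iterative CoT enrichment loop with rejection
# sampling, as summarized in the abstract. The model interface, the
# answer-extraction rule, and all hyperparameters are assumptions.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class QAPair:
    question: str
    gold_answer: str

def extract_final_answer(cot: str) -> str:
    # Hypothetical convention: the rationale ends with "Answer: <option>".
    return cot.rsplit("Answer:", 1)[-1].strip()

def si_cote_sketch(
    model_generate: Callable[[str, int], List[str]],   # (prompt, n_samples) -> rationales
    model_finetune: Callable[[List[Tuple[str, str]]], None],
    corpus: List[QAPair],
    rounds: int = 3,
    samples_per_question: int = 8,
) -> List[Tuple[str, str]]:
    """Return (question, validated chain-of-thought) pairs accumulated over rounds."""
    accepted: List[Tuple[str, str]] = []
    for _ in range(rounds):
        new_traces: List[Tuple[str, str]] = []
        for item in corpus:
            prompt = f"{item.question}\nThink step by step, then give the final answer."
            for cot in model_generate(prompt, samples_per_question):
                # Rejection sampling: keep a rationale only if it reaches the gold answer.
                if extract_final_answer(cot) == item.gold_answer:
                    new_traces.append((item.question, cot))
                    break  # one validated chain per question per round in this sketch
        accepted.extend(new_traces)
        # Co-evolution step: retraining on accepted traces should yield better
        # rationales (and thus more accepted traces) in the next round.
        model_finetune(new_traces)
    return accepted
```

The key design point carried over from the abstract is that rejection against the verified exam answer is what makes the generated chains "validated": only rationales that terminate in the correct answer are allowed back into the training corpus, so data quality and model quality can improve together across rounds.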