Existing evaluations of Large Language Models (LLMs) on static benchmarks are vulnerable to data contamination and leaderboard overfitting, critical issues that obscure models' true capabilities. To address this, we introduce LLMEval-3, a framework for dynamic evaluation of LLMs. LLMEval-3 is built on a proprietary bank of 220k graduate-level questions, from which it dynamically samples previously unseen test sets for each evaluation run. Its automated pipeline ensures integrity via contamination-resistant data curation, a novel anti-cheating architecture, and a calibrated LLM-as-a-judge process that achieves 90% agreement with human experts, complemented by a relative ranking system for fair comparison. A 20-month longitudinal study of nearly 50 leading models reveals a performance ceiling on knowledge memorization and exposes data contamination vulnerabilities undetectable by static benchmarks. The framework demonstrates exceptional robustness in ranking stability and consistency, providing strong empirical validation for the dynamic evaluation paradigm. LLMEval-3 offers a robust and credible methodology for assessing the true capabilities of LLMs beyond leaderboard scores, promoting the development of more trustworthy evaluation standards.
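The dynamic-sampling idea summarized above can be pictured with a minimal sketch: each evaluation run draws a fresh subset from the question bank while excluding items served in earlier runs. The function and parameter names below (`sample_eval_set`, `question_bank`, `used_ids`) are hypothetical illustrations, not the LLMEval-3 implementation or API.

```python
import random

def sample_eval_set(question_bank, n_questions=1000, used_ids=None, seed=None):
    """Illustrative per-run sampler (assumed design, not the LLMEval-3 code).

    question_bank: list of dicts, each with a unique 'id' field.
    used_ids: ids already served in previous runs; sampled items are excluded.
    Returns the sampled questions and the updated set of served ids.
    """
    if used_ids is None:
        used_ids = set()
    rng = random.Random(seed)
    # Restrict sampling to questions no earlier run has exposed.
    unseen = [q for q in question_bank if q["id"] not in used_ids]
    sampled = rng.sample(unseen, k=min(n_questions, len(unseen)))
    # Record the served items so subsequent runs keep their test sets unseen.
    used_ids.update(q["id"] for q in sampled)
    return sampled, used_ids
```

Under this sketch, contamination resistance comes from the fact that no fixed test set is ever published: each run's items are drawn on demand and retired afterward.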