Large language models (LLMs) have demonstrated remarkable progress in understanding long-context inputs. However, benchmarks for evaluating the long-context reasoning abilities of LLMs have not kept pace. Existing benchmarks often cover only a narrow range of tasks, or tasks that do not demand complex reasoning. To address this gap and enable a more comprehensive evaluation of the long-context reasoning capabilities of current LLMs, we propose a new synthetic benchmark, LongReason, which is constructed by synthesizing long-context reasoning questions from a diverse set of short-context reasoning questions through context expansion. LongReason consists of 794 multiple-choice reasoning questions with diverse reasoning patterns across three task categories: reading comprehension, logical inference, and mathematical word problems. We evaluate 21 LLMs on LongReason and find that most models suffer significant performance drops as context length increases. Further analysis shows that even state-of-the-art LLMs still have substantial room for improvement in reasoning robustly across tasks at long context lengths. We have open-sourced LongReason at https://huggingface.co/datasets/lz1bytedance/LongReason to support the comprehensive evaluation of LLMs' long-context reasoning capabilities.
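To make the "context expansion" idea concrete, the sketch below shows one plausible way such a construction could work; it is not the authors' actual pipeline, and all names in it (`expand_context`, `filler_passages`) are illustrative. A short question's supporting facts are scattered among long filler passages so that a model must retrieve them from a long input before it can reason.

```python
# Minimal sketch of context expansion (illustrative, not the paper's pipeline):
# embed a short question's supporting facts at scattered positions inside
# filler text until the input reaches roughly a target length.

import random


def expand_context(facts: list[str], question: str,
                   filler_passages: list[str], target_tokens: int) -> str:
    """Interleave supporting facts with filler passages until the combined
    context reaches roughly `target_tokens` whitespace-delimited tokens."""
    context: list[str] = []
    token_count = 0
    pending = list(facts)
    random.shuffle(pending)  # randomize where each fact lands
    for passage in filler_passages:
        context.append(passage)
        token_count += len(passage.split())
        if pending:  # drop the next fact somewhere between fillers
            fact = pending.pop()
            context.append(fact)
            token_count += len(fact.split())
        if token_count >= target_tokens and not pending:
            break
    return "\n\n".join(context) + "\n\nQuestion: " + question


if __name__ == "__main__":
    facts = ["Alice has 3 apples.", "Bob gives Alice 2 more apples."]
    fillers = ["(unrelated background passage ...)"] * 50
    long_input = expand_context(facts, "How many apples does Alice have?",
                                fillers, target_tokens=200)
    print(long_input[:300])
```

Varying `target_tokens` in a scheme like this yields the same underlying reasoning question at different context lengths, which is what allows performance to be compared as the context grows.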