Large Language Models (LLMs) are increasingly deployed in time-critical systems such as robotics, autonomous driving, embodied intelligence, and industrial automation, where generating accurate responses within a given time budget is crucial for decision-making, control, and safety-critical tasks. However, the auto-regressive generation process of LLMs makes it challenging to model and estimate end-to-end execution time. Furthermore, existing efficient inference methods based on a fixed key-value (KV) cache eviction ratio struggle to adapt to varying tasks with diverse time budgets, where an improper eviction ratio may lead to incomplete inference or degraded response performance. In this paper, we propose TimeBill, a novel time-budgeted inference framework for LLMs that balances inference efficiency and response performance. Specifically, we propose a fine-grained response length predictor (RLP) and an execution time estimator (ETE) to accurately predict the end-to-end execution time of LLMs. Building on these, we develop a time-budgeted efficient inference approach that adaptively adjusts the KV cache eviction ratio based on the predicted execution time and the given time budget. Finally, through extensive experiments, we demonstrate the advantages of TimeBill in improving task completion rate and maintaining response performance under various overrun strategies.
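The core idea of budget-driven eviction-ratio selection can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual RLP/ETE design: the linear cost model, all constants, and both function names are assumptions introduced here for illustration.

```python
# Hypothetical sketch of time-budgeted KV cache eviction-ratio selection.
# The cost model below (per-token decode latency that shrinks as more KV
# entries are evicted) is an illustrative assumption, not the paper's ETE.

def estimate_exec_time(pred_len: int, prompt_len: int, evict_ratio: float,
                       t_prefill_per_tok: float = 0.5e-3,
                       t_decode_per_tok: float = 20e-3) -> float:
    """Estimate end-to-end time (s) as prefill + decode, where decoding
    a token gets cheaper as the retained KV cache shrinks (assumed model)."""
    kept = 1.0 - evict_ratio
    prefill = prompt_len * t_prefill_per_tok
    decode = pred_len * t_decode_per_tok * (0.3 + 0.7 * kept)
    return prefill + decode

def pick_eviction_ratio(pred_len: int, prompt_len: int, budget_s: float,
                        max_ratio: float = 0.9, step: float = 0.05) -> float:
    """Return the smallest eviction ratio whose estimated execution time
    fits the budget; smaller ratios keep more context, preserving quality."""
    r = 0.0
    while r <= max_ratio:
        if estimate_exec_time(pred_len, prompt_len, r) <= budget_s:
            return r
        r = round(r + step, 2)
    return max_ratio  # budget infeasible: evict as aggressively as allowed

# pred_len would come from the response length predictor (RLP) in TimeBill.
ratio = pick_eviction_ratio(pred_len=256, prompt_len=1024, budget_s=4.0)
```

A tight budget forces a higher eviction ratio (faster but lossier decoding), while a generous budget lets the ratio fall to zero, recovering standard full-cache inference.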