Large Language Models demonstrate strong reasoning and generation abilities, yet their behavior in multi-turn tasks often lacks reliability and verifiability. We present a task completion framework that enables LLM-based agents to act under explicit behavioral guidance in environments formalized in reinforcement learning terms, with defined observation, action, and reward signals. The framework integrates three components: a lightweight task profiler that selects reasoning and generation strategies, a reasoning module that learns verifiable observation-action mappings, and a generation module that enforces constraint-compliant outputs through validation or deterministic synthesis. We show that as the agent interacts with the environment, these components co-evolve, yielding trustworthy behavior.
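For concreteness, the sketch below shows one way the three components could interact in a single rollout loop. It is a minimal illustration under assumed interfaces: every name here (`TaskProfiler`, `ReasoningModule`, `GenerationModule`, and the `env.reset`/`env.step` protocol) is hypothetical and not drawn from the framework's actual API.

```python
# Hypothetical sketch of the three-component loop; all class and method
# names are illustrative assumptions, not the framework's actual API.
from dataclasses import dataclass, field


@dataclass
class TaskProfiler:
    """Lightweight profiler that picks a reasoning/generation strategy."""

    def select_strategy(self, task_description: str) -> str:
        # Assumption: strategy selection keys off simple task features.
        return "plan_then_act" if "multi-turn" in task_description else "direct"


@dataclass
class ReasoningModule:
    """Learns a verifiable observation -> action mapping from experience."""

    rules: dict = field(default_factory=dict)

    def act(self, observation: str, strategy: str) -> str:
        # Fall back to a default exploratory action when no verified rule exists.
        return self.rules.get(observation, f"explore[{strategy}]")

    def update(self, observation: str, action: str, reward: float) -> None:
        # Retain only mappings that earned positive reward (co-evolution step).
        if reward > 0:
            self.rules[observation] = action


@dataclass
class GenerationModule:
    """Enforces constraint-compliant outputs via validation."""

    def emit(self, action: str, constraints) -> str:
        # Validate against all constraints; on failure, fall back to a
        # deterministic synthesis step (here, a trivially compliant no-op).
        if all(check(action) for check in constraints):
            return action
        return "noop"


def episode(env, profiler, reasoner, generator, constraints, max_turns=10):
    """One multi-turn rollout in which the components interact and co-evolve.

    Assumes `env` exposes `task_description`, `reset()`, and a `step(action)`
    returning (next_observation, reward, done).
    """
    strategy = profiler.select_strategy(env.task_description)
    obs = env.reset()
    for _ in range(max_turns):
        action = generator.emit(reasoner.act(obs, strategy), constraints)
        next_obs, reward, done = env.step(action)
        reasoner.update(obs, action, reward)  # learn verified mappings
        obs = next_obs
        if done:
            break
```

The co-evolution claim is reflected only in the `update` step here, which keeps observation-action rules that earned positive reward; a full implementation would also adapt the profiler and the generation constraints over time.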