Agentic reinforcement learning (RL) holds great promise for developing autonomous agents for complex GUI tasks, but its scalability remains severely hampered by the verification of task completion. Existing approaches treat verification as a passive, post-hoc process: a verifier (e.g., a rule-based scoring script, a reward or critic model, or an LLM-as-a-Judge) analyzes the agent's entire interaction trajectory to determine whether the agent succeeded. Processing such verbose context, laden with irrelevant and noisy history, strains verification protocols and leads to prohibitive cost and low reliability. To overcome this bottleneck, we propose SmartSnap, a paradigm shift from passive, post-hoc verification to proactive, in-situ self-verification by the agent itself. We introduce the Self-Verifying Agent, a new type of agent designed with a dual mission: not only to complete a task but also to prove its accomplishment with curated snapshot evidence. Guided by our proposed 3C Principles (Completeness, Conciseness, and Creativity), the agent leverages its access to the online environment to perform self-verification on a minimal, decisive set of snapshots. This evidence is provided as the sole material for a general LLM-as-a-Judge verifier to determine its validity and relevance. Experiments on mobile tasks across model families and scales demonstrate that the SmartSnap paradigm enables training LLM-driven agents in a scalable manner, yielding performance gains of up to 26.08% and 16.66% for 8B and 30B models, respectively. The synergy between solution finding and evidence seeking cultivates efficient, self-verifying agents with performance competitive with DeepSeek V3.1 and Qwen3-235B-A22B.
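To make the verification interface concrete, the sketch below illustrates one way the judge-side scoring described above could look: the verifier receives only the task description and the agent's curated snapshots, never the full interaction trajectory. This is a minimal sketch under stated assumptions, not the paper's implementation; the `Snapshot` fields (including the per-snapshot rationale) and the `judge` callable's signature are illustrative.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Snapshot:
    step: int          # index of the action after which the screenshot was captured
    rationale: str     # agent's stated reason this snapshot evidences completion (assumed field)
    image_path: str    # path to the captured GUI screenshot


def judge_reward(task: str, snapshots: List[Snapshot],
                 judge: Callable[..., str]) -> float:
    """Score a rollout from curated evidence only, without the full trajectory.

    `judge` is any LLM-as-a-Judge callable mapping a text prompt plus images
    to a verdict string; its exact API is an assumption for illustration.
    """
    prompt = (
        f"Task: {task}\n"
        "Decide from the attached snapshots alone whether the task was completed.\n"
        "Check that the evidence is valid (consistent with the GUI state) and "
        "relevant (directly demonstrates the required outcome). Answer YES or NO."
    )
    for s in snapshots:
        prompt += f"\n[Snapshot {s.step}] {s.rationale}"
    verdict = judge(prompt, images=[s.image_path for s in snapshots])
    # Binary reward: 1.0 if the judge accepts the evidence, else 0.0.
    return 1.0 if verdict.strip().upper().startswith("YES") else 0.0
```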