Group-relative reinforcement learning with verifiable rewards (RLVR) often wastes the most informative data it already has: the failures. When all rollouts are wrong, gradients stall; when one happens to be correct, the update usually ignores why the others are close-but-wrong, and credit can be misassigned to spurious reasoning chains. We present CARE (Contrastive Anchored REflection), a failure-centric post-training framework for multimodal reasoning that turns errors into supervision. CARE combines: (i) an anchored-contrastive objective that forms a compact subgroup around the best rollout and a set of semantically proximate hard negatives, performs within-subgroup z-score normalization with negative-only scaling, and includes an all-negative rescue to prevent zero-signal batches; and (ii) Reflection-Guided Resampling (RGR), a one-shot structured self-repair that rewrites a representative failure and re-scores it with the same verifier, converting near-misses into usable positives without any test-time reflection. CARE improves accuracy and training smoothness while explicitly increasing the share of learning signal that comes from failures. On Qwen2.5-VL-7B, CARE lifts macro-averaged accuracy by 4.6 points over GRPO across six verifiable visual-reasoning benchmarks; with Qwen3-VL-8B it reaches competitive or state-of-the-art results on MathVista and MMMU-Pro under an identical evaluation protocol.
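The following is a minimal NumPy sketch of the two mechanisms described above, under stated assumptions: the function names (`care_advantages`, `reflection_guided_resample`), the subgroup size `k`, the negative-only scaling factor, and the rescue scaling constant are illustrative placeholders, not the paper's actual interfaces or hyperparameters.

```python
import numpy as np

def care_advantages(rewards, sims, k=4, neg_scale=0.5, rescue_scale=0.1, eps=1e-6):
    """Sketch of the anchored-contrastive advantage computation.

    rewards : verifiable rewards for each rollout in the group (e.g. 0/1)
    sims    : semantic similarity of each rollout to the anchor (best rollout)
    """
    rewards = np.asarray(rewards, dtype=float)
    sims = np.asarray(sims, dtype=float)

    # All-negative rescue: if every rollout failed, emit a small zero-mean
    # signal keyed to similarity so the batch is not zero-gradient.
    if rewards.max() <= 0:
        return rescue_scale * (sims - sims.mean())

    # Anchor on the best-scoring rollout.
    best_idx = int(np.argmax(rewards))

    # Hard negatives: the k failed rollouts most semantically proximate
    # to the anchor form a compact subgroup with it.
    neg_idx = [int(i) for i in np.argsort(-sims) if rewards[i] <= 0][:k]
    subgroup = [best_idx] + neg_idx

    # Within-subgroup z-score normalization of rewards.
    sub_r = rewards[subgroup]
    z = (sub_r - sub_r.mean()) / (sub_r.std() + eps)

    # Negative-only scaling: shrink negative advantages so hard negatives
    # push, but do not dominate, the update.
    z = np.where(z < 0, neg_scale * z, z)

    # Rollouts outside the subgroup contribute no gradient in this sketch.
    adv = np.zeros_like(rewards)
    adv[subgroup] = z
    return adv


def reflection_guided_resample(failed_rollout, rewrite, verifier):
    """Sketch of RGR: rewrite a representative failure once (one-shot,
    structured self-repair) and re-score it with the same verifier; a
    repaired rollout that now passes becomes a usable positive."""
    repaired = rewrite(failed_rollout)   # single structured rewrite pass
    return repaired if verifier(repaired) else None
```

For example, `care_advantages([0, 0, 1, 0], sims=[0.2, 0.7, 1.0, 0.6], k=2)` yields a positive advantage for the correct rollout, scaled-down negative advantages for its two closest failed peers, and zero for the remaining rollout, which is the failure-centric credit assignment the abstract describes.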