Large recommender models have extended LLMs into powerful recommenders via item encoding or item generation, and recent breakthroughs in LLM reasoning have in parallel motivated the exploration of reasoning in recommendation. In this work, we propose R$^2$ec, a unified large recommender model with intrinsic reasoning capability. R$^2$ec introduces a dual-head architecture that supports both reasoning chain generation and efficient item prediction in a single model, significantly reducing inference latency. To overcome the lack of annotated reasoning data, we design RecPO, a reinforcement learning framework that jointly optimizes reasoning and recommendation with a novel fused reward mechanism. Extensive experiments on three datasets demonstrate that R$^2$ec outperforms traditional, LLM-based, and reasoning-augmented recommender baselines, while further analyses validate its competitive efficiency relative to conventional LLM-based recommenders and its strong adaptability to diverse recommendation scenarios. Code and checkpoints are available at https://github.com/YRYangang/RRec.