While large language models (LLMs) achieve strong performance in recommendation, they face challenges in continual learning as users, items, and user preferences evolve over time. Existing LoRA-based continual learning methods primarily focus on preserving performance on previous tasks, but this overlooks the unique nature of recommendation: the goal is not to predict past preferences, and outdated preferences can even harm performance when current interests shift significantly. To address this, we propose PESO (Proximally rEgularized Single evolving lOra), a continual adaptation method for LoRA in recommendation. PESO introduces a proximal regularizer that anchors the current adapter to its most recent frozen state, enabling the model to flexibly balance adaptation and preservation and to better capture recent user behaviors. Theoretically, we show that this proximal design provides data-aware, direction-wise guidance in the LoRA subspace. Empirically, PESO consistently outperforms existing LoRA-based continual learning methods.
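To make the proximal anchoring idea concrete, the following is a minimal sketch (not the paper's actual implementation; the class, parameter names, and the `strength` coefficient are illustrative assumptions) of how a penalty could tie the live LoRA adapter to a frozen snapshot taken at the end of the previous continual-learning period.

```python
import torch


class ProximalLoRARegularizer:
    """Illustrative sketch: anchor the current LoRA adapter to its most
    recent frozen snapshot via a squared-distance proximal penalty."""

    def __init__(self, lora_params, strength=0.1):
        # Freeze a copy of the adapter from the previous period; this
        # snapshot is never updated while the live adapter keeps training.
        self.anchor = {name: p.detach().clone() for name, p in lora_params}
        self.strength = strength

    def penalty(self, lora_params):
        # Squared distance between the live adapter and the frozen anchor,
        # pulling updates toward recent behavior without freezing them.
        reg = torch.tensor(0.0)
        for name, p in lora_params:
            reg = reg + (p - self.anchor[name]).pow(2).sum()
        return self.strength * reg


# Hypothetical usage within one continual-learning period:
#   lora_params = [(n, p) for n, p in model.named_parameters() if "lora" in n]
#   prox = ProximalLoRARegularizer(lora_params, strength=0.1)
#   loss = task_loss + prox.penalty(lora_params)
#   loss.backward(); optimizer.step()
```

The snapshot-and-penalize pattern above is one plausible way to realize the described balance between adapting to new interactions and preserving recent behavior; the paper's data-aware, direction-wise analysis refers to its specific formulation in the LoRA subspace, which this sketch does not reproduce.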