Collaborative information derived from user-item interactions is a fundamental signal in successful recommender systems. Recently, researchers have attempted to incorporate this knowledge into large language model-based recommendation approaches (LLMRec) to enhance their performance. However, there has been little fundamental analysis of whether LLMs can effectively reason over collaborative information. In this paper, we analyze the ability of LLMs to reason about collaborative information in recommendation tasks, comparing their performance against traditional matrix factorization (MF) models. We propose a simple and effective method that applies retrieval-augmented generation (RAG) to the user-item interaction matrix, combined with four different prompting strategies, to improve LLMs' reasoning capabilities. Our results show that the LLM outperforms the MF model whenever we provide relevant information in a clear, easy-to-follow format and prompt the LLM to reason over it. We further observe that, under this strategy, in almost all cases the more information we provide, the better the LLM performs.
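To make the retrieval step concrete, the following is a minimal, hypothetical sketch of RAG over a user-item interaction matrix: retrieve the k most similar users by cosine similarity on binary interaction vectors, then format their interactions into a prompt for the LLM. The function names, the toy matrix, and the prompt wording are illustrative assumptions, not the paper's actual implementation or one of its four specific prompting strategies.

```python
# Hypothetical sketch: retrieval-augmented prompting over a user-item
# interaction matrix. Not the paper's implementation.
from math import sqrt


def cosine(u, v):
    """Cosine similarity between two interaction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def retrieve_neighbors(matrix, target, k=2):
    """Return indices of the k users most similar to `target`."""
    scores = [(cosine(matrix[target], row), i)
              for i, row in enumerate(matrix) if i != target]
    return [i for _, i in sorted(scores, reverse=True)[:k]]


def build_prompt(matrix, items, target, k=2):
    """Assemble a prompt listing the target's and neighbors' interactions."""
    lines = ["The target user liked: " +
             ", ".join(it for it, r in zip(items, matrix[target]) if r)]
    for n in retrieve_neighbors(matrix, target, k):
        liked = ", ".join(it for it, r in zip(items, matrix[n]) if r)
        lines.append(f"A similar user liked: {liked}")
    lines.append("Based on these interactions, which item should the "
                 "target user try next? Explain your reasoning.")
    return "\n".join(lines)


# Toy 4-user x 4-item binary interaction matrix (illustrative data).
items = ["ItemA", "ItemB", "ItemC", "ItemD"]
matrix = [
    [1, 1, 0, 0],  # user 0 (target)
    [1, 1, 1, 0],  # user 1: overlaps with the target, also liked ItemC
    [0, 0, 1, 1],  # user 2: no overlap with the target
    [1, 0, 1, 0],  # user 3: partial overlap
]
print(build_prompt(matrix, items, target=0))
```

The retrieved context does the collaborative-filtering work here: the prompt surfaces that the most similar user also liked ItemC, so the LLM only needs to reason over a small, clearly formatted neighborhood rather than the full matrix.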