Sequential recommendation (SR) models with transformer-based architectures are widely adopted in real-world applications, where they require frequent retraining to adapt to ever-changing user preferences. However, training transformer-based SR models often incurs a high computational cost from scoring extensive item catalogs, which frequently exceed thousands of items. This cost stems mainly from the cross-entropy (CE) loss, whose peak memory scales proportionally to the catalog size, the batch size, and the sequence length. Recognizing this, practitioners in the field of recommendation systems typically combine the CE loss with negative sampling, thereby reducing the explicit memory demands of the final layer. However, too few negative samples degrade model performance, and, as we demonstrate in our work, increasing both the number of negative samples and the batch size further improves performance but quickly exceeds the memory capacity of industrial GPUs (~40 GB). In this work, we introduce the CCE- method, a GPU-efficient implementation of the CE loss with negative sampling. Our method accelerates training by up to two times while reducing memory consumption by more than ten times. The memory savings afforded by training with CCE- make it feasible to improve model accuracy on datasets with large item catalogs compared to models trained with the original PyTorch loss implementations. Finally, we analyze the key memory-related hyperparameters and highlight the necessity of a delicate balance among them: scaling both the number of negative samples and the batch size leads to better results than maximizing only one of them. To facilitate further adoption of CCE-, we release a Triton kernel that efficiently implements the proposed method.
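To make the memory argument concrete, the following is a minimal NumPy sketch (not the paper's CCE- kernel, which is implemented in Triton) of CE loss with negative sampling: each position is scored against its positive item plus k sampled negatives, so the logits tensor shrinks from batch × sequence × catalog to batch × sequence × (1 + k) entries. All shapes, names, and the uniform negative sampler here are illustrative assumptions.

```python
import numpy as np

def sampled_ce_loss(hidden, item_emb, pos_ids, num_negatives, rng):
    """CE loss with negative sampling: score each position against its
    positive item plus `num_negatives` uniformly sampled negatives,
    instead of the full catalog.
    Shapes: hidden (B, L, D), item_emb (V, D), pos_ids (B, L)."""
    V = item_emb.shape[0]
    B, L, _ = hidden.shape
    neg_ids = rng.integers(0, V, size=(B, L, num_negatives))
    # Candidate set per position: positive at index 0, then negatives.
    cand_ids = np.concatenate([pos_ids[..., None], neg_ids], axis=-1)  # (B, L, 1+k)
    cand_emb = item_emb[cand_ids]                                      # (B, L, 1+k, D)
    logits = np.einsum("bld,blkd->blk", hidden, cand_emb)              # (B, L, 1+k)
    # Numerically stable log-softmax; the positive is always index 0.
    logits -= logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[..., 0].mean()

rng = np.random.default_rng(0)
B, L, D, V, k = 4, 16, 32, 50_000, 255
hidden = rng.standard_normal((B, L, D))
item_emb = rng.standard_normal((V, D))
pos_ids = rng.integers(0, V, size=(B, L))
loss = sampled_ce_loss(hidden, item_emb, pos_ids, k, rng)
# Logits memory shrinks by V / (1 + k), here 50000 / 256 ≈ 195x.
print(f"loss={loss:.3f}, logits shrink factor={V / (k + 1):.0f}x")
```

As the abstract notes, a naive implementation still materializes the (B, L, 1+k) logits and the gathered candidate embeddings, so growing both k and the batch size quickly exhausts GPU memory; avoiding that materialization is what the fused Triton kernel addresses.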