In recommendation systems, scaling up feature-interaction modules (e.g., Wukong, RankMixer) or user-behavior sequence modules (e.g., LONGER) has achieved notable success. However, these efforts typically proceed on separate tracks, which not only hinders bidirectional information exchange but also prevents unified optimization and scaling. In this paper, we propose OneTrans, a unified Transformer backbone that simultaneously performs user-behavior sequence modeling and feature interaction. OneTrans employs a unified tokenizer to convert both sequential and non-sequential attributes into a single token sequence. The stacked OneTrans blocks share parameters across similar sequential tokens while assigning token-specific parameters to non-sequential tokens. Through causal attention and cross-request KV caching, OneTrans enables precomputation and caching of intermediate representations, significantly reducing computational costs during both training and inference. Experimental results on industrial-scale datasets demonstrate that OneTrans scales efficiently with increasing parameters, consistently outperforms strong baselines, and yields a 5.68% lift in per-user GMV in online A/B tests.
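The key systems claim above (causal attention plus cross-request KV caching lets the user-behavior prefix be precomputed once and reused) can be sketched in a few lines. This is a minimal single-head NumPy illustration, not OneTrans itself: the dimensions, the 5-behavior/3-attribute token split, and the `attend` helper are all illustrative assumptions. It shows that with a causal mask, attention over only the new non-sequential tokens against cached prefix keys/values reproduces the full-sequence result exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model dimension (illustrative)

def attend(Q, K, V, offset=0):
    """Causal attention: query i (absolute position offset+i) sees keys 0..offset+i."""
    scores = Q @ K.T / np.sqrt(d)
    n_q, n_k = scores.shape
    mask = np.arange(n_k)[None, :] > (offset + np.arange(n_q))[:, None]
    scores = np.where(mask, -1e9, scores)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

# Unified token sequence: 5 user-behavior tokens followed by 3 non-sequential
# attribute tokens (the ordering that makes the behavior prefix cacheable).
tokens = rng.normal(size=(8, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv

# Full forward pass over all 8 tokens.
full = attend(Q, K, V)

# Cross-request reuse: K/V for the behavior prefix (tokens 0..4) are cached,
# so a new request only runs attention for its 3 non-sequential tokens.
K_cache, V_cache = K[:5], V[:5]
incr = attend(Q[5:],
              np.concatenate([K_cache, K[5:]]),
              np.concatenate([V_cache, V[5:]]),
              offset=5)

assert np.allclose(full[5:], incr)  # cached path matches the full pass
```

Because causal attention prevents prefix tokens from attending to later tokens, their keys and values are independent of the per-request attributes, which is what makes caching them across requests sound.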