Over the past 10 years, many recommendation techniques have been based on embedding users and items in latent vector spaces, where the inner product of a (user, item) pair of vectors represents the predicted affinity of the user to the item. A wealth of literature has focused on the various modeling approaches that produce embeddings and has compared their quality metrics, learning complexity, etc. However, much less attention has been devoted to the issues surrounding productization of an embeddings-based, high-throughput, low-latency recommender system: in particular, how the system keeps up with changing embeddings as new models are learned. This paper describes a reference architecture for a high-throughput, large-scale recommendation service that leverages a search engine as its runtime core. We describe how the search index and the query builder adapt to changes in the embeddings, which often happen at a different cadence than index builds. We provide solutions for both id-based and feature-based embeddings, as well as for batch and incremental indexing setups. The described system is at the core of a Web content discovery service that serves tens of billions of recommendations per day in response to billions of user requests.
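As a minimal illustration of the scoring rule the abstract refers to, predicted affinity can be sketched as an inner product of learned user and item vectors, with candidates ranked by descending score. The vectors, item names, and function names below are hypothetical and purely illustrative, not part of the described system:

```python
def affinity(user_vec, item_vec):
    """Predicted affinity of a user to an item: the inner product of their embeddings."""
    return sum(u * i for u, i in zip(user_vec, item_vec))

def rank_items(user_vec, items):
    """Rank (name, vector) candidate pairs by descending predicted affinity."""
    return sorted(items, key=lambda pair: affinity(user_vec, pair[1]), reverse=True)

# Toy example: one user embedding and a two-item catalog (made-up values).
user = [0.2, 0.9, -0.4]
catalog = {"article_a": [0.1, 0.8, 0.0], "article_b": [-0.5, 0.2, 0.7]}
ranked = rank_items(user, list(catalog.items()))
```

In production, this ranking is what the search engine's retrieval and scoring machinery computes at scale, rather than an in-memory loop over the full catalog.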