Recommender systems play a crucial role in our daily lives. The feed streaming mechanism has been widely used in recommender systems, especially in mobile apps. The feed streaming setting provides users with an interactive manner of recommendation through never-ending feeds. In such an interactive manner, a good recommender system should pay more attention to user stickiness, which goes far beyond classical instant metrics and is typically measured by {\bf long-term user engagement}. Directly optimizing long-term user engagement is a non-trivial problem, as the learning target is usually not available to conventional supervised learning methods. Though reinforcement learning~(RL) naturally fits the problem of maximizing long-term rewards, applying RL to optimize long-term user engagement still faces challenges: user behaviors are versatile and difficult to model, as they typically consist of both instant feedback~(e.g., clicks, ordering) and delayed feedback~(e.g., dwell time, revisits); in addition, effective off-policy learning remains immature, especially when bootstrapping is combined with function approximation. To address these issues, in this work we introduce a reinforcement learning framework --- FeedRec --- to optimize long-term user engagement. FeedRec consists of two components: 1)~a Q-Network, designed as a hierarchical LSTM, which models complex user behaviors, and 2)~an S-Network, which simulates the environment, assists the Q-Network, and avoids the instability of convergence in policy learning. Extensive experiments on synthetic data and a real-world large-scale dataset show that FeedRec effectively optimizes long-term user engagement and outperforms state-of-the-art methods.
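To make the learning target concrete, feed recommendation can be cast as a Markov decision process, for which a Q-Network is trained with the standard Q-learning update. The decomposition of the reward into instant and delayed terms below is an illustrative assumption reflecting the two feedback types named above, not necessarily the paper's exact formulation:
\[
r_t = r_t^{\mathrm{instant}} + r_t^{\mathrm{delayed}}, \qquad
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right],
\]
where $s_t$ is the user state summarized from the behavior history, $a_t$ is the recommended item, $\alpha$ is the learning rate, and the discount factor $\gamma$ controls how strongly delayed engagement signals are weighted against instant feedback.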