Biography: Enhong Chen is a professor and doctoral supervisor at the University of Science and Technology of China (USTC), a recipient of the National Science Fund for Distinguished Young Scholars, and an IEEE Senior Member. He was selected for the Ministry of Education's New Century Excellent Talents Program in 2005. He currently serves as Vice Dean of the School of Computer Science and Technology at USTC and Deputy Director of the National Engineering Laboratory for Speech and Language Information Processing. He is a member of the Ministry of Education's Teaching Steering Committee for Computer Science programs; a council member of both the China Computer Federation (CCF) and the Chinese Association for Artificial Intelligence (CAAI); a member of the CCF technical committees on Artificial Intelligence and Pattern Recognition and on Databases, and of the CCF Task Force on Big Data; and Vice Chair of the CAAI Technical Committee on Knowledge Engineering and Distributed Intelligence as well as a member of the CAAI Technical Committee on Machine Learning.
Abstract: Inductive transfer learning has had a substantial impact on computer vision and NLP, but has rarely been explored for recommender systems. Although a large body of research has modeled user-item interaction sequences to generate recommendations, little of it attempts to represent and transfer these models to downstream tasks where only limited data is available. In this paper, we address the task of effectively learning a single user representation that can be applied to a diversity of tasks, from cross-domain recommendations to user profile predictions. Fine-tuning a large pre-trained network and adapting it to downstream tasks is an effective way to solve such tasks. However, fine-tuning is parameter-inefficient, since an entire copy of the model must be re-trained for every new task. To overcome this issue, we develop a parameter-efficient transfer learning architecture, termed PeterRec, which can be configured on-the-fly for various downstream tasks. Specifically, PeterRec keeps the pre-trained parameters unaltered during fine-tuning by injecting a series of re-learned neural networks, which are small yet as expressive as re-learning the entire network. We perform extensive ablation experiments to show the effectiveness of the learned user representation on five downstream tasks. Moreover, we show that PeterRec performs efficient transfer learning across multiple domains, achieving performance comparable to, and sometimes better than, fine-tuning all model parameters. Code and datasets are available at https://github.com/fajieyuan/sigir2020_peterrec.
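To make the core idea concrete, below is a minimal PyTorch sketch of this style of parameter-efficient fine-tuning: the pre-trained backbone is frozen, and only small injected "patch" networks plus a new task head are trained. This is an illustrative assumption of the mechanism described in the abstract, not the authors' released implementation; the `AdapterPatch` class, layer sizes, and patch placement here are hypothetical.

```python
# Sketch: freeze a pre-trained backbone; train only small injected
# bottleneck patches and a task-specific head (assumed design, not
# PeterRec's exact code).
import torch
import torch.nn as nn

class AdapterPatch(nn.Module):
    """Small bottleneck network inserted after a frozen layer.

    Learns a residual correction: out = x + up(relu(down(x))).
    With k << d, it adds far fewer parameters than re-training the layer.
    """
    def __init__(self, d: int, k: int = 16):
        super().__init__()
        self.down = nn.Linear(d, k)      # project d -> k
        self.up = nn.Linear(k, d)        # project k -> d
        nn.init.zeros_(self.up.weight)   # zero init: patch starts as identity,
        nn.init.zeros_(self.up.bias)     # so fine-tuning begins from the
                                         # pre-trained model's behavior

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))

class PatchedModel(nn.Module):
    def __init__(self, backbone: nn.Sequential, d: int, n_classes: int):
        super().__init__()
        for p in backbone.parameters():  # pre-trained weights stay unaltered
            p.requires_grad = False
        # interleave a trainable patch after every frozen backbone layer
        layers = []
        for layer in backbone:
            layers += [layer, AdapterPatch(d)]
        self.body = nn.Sequential(*layers)
        self.head = nn.Linear(d, n_classes)  # new task-specific output layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.body(x))

# Usage: for each downstream task, only the patches and head are optimized,
# so one frozen backbone can serve many tasks.
backbone = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
model = PatchedModel(backbone, d=64, n_classes=5)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
logits = model(torch.randn(8, 64))  # forward pass on a toy batch
```

Zero-initializing the patch's up-projection means each patch initially passes activations through unchanged, which matches the abstract's claim that the pre-trained parameters remain unaltered while the small injected networks learn task-specific adjustments.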