Training keyphrase generation (KPG) models requires a large amount of annotated data, which can be prohibitively expensive and is often limited to specific domains. In this study, we first demonstrate that large distribution shifts among different domains severely hinder the transferability of KPG models. We then propose a three-stage pipeline that gradually shifts KPG models' learning focus from general syntactic features to domain-related semantics, in a data-efficient manner. In the Domain-general Phrase pre-training stage, we pre-train sequence-to-sequence models with generic phrase annotations that are widely available on the web, enabling the models to generate phrases across a wide range of domains. The resulting model is then applied in the Transfer Labeling stage to produce domain-specific pseudo keyphrases, which help adapt the model to a new domain. Finally, we fine-tune the model with limited ground-truth labeled data to fully adapt it to the target domain. Our experimental results show that the proposed pipeline produces high-quality keyphrases in new domains and achieves consistent improvements after adaptation with limited in-domain annotated data.