Controlling a generative model to adapt to a new domain with limited samples is a challenging problem that is receiving increasing attention. Recently, few-shot learning has shown promising progress in domain adaptation. However, the texts generated under few-shot learning typically lack linguistic diversity. To address this shortcoming, we frame the adaptation of text generation systems as a reinforcement learning problem and propose a new approach that makes text generation models easily adaptable to a target domain with a minimal amount of in-domain data. Experimental results on five target domains in two few-shot configurations demonstrate that our method significantly outperforms existing domain adaptation approaches when very few in-domain samples are available.