Controlling a model to generate text of different categories is a challenging task that is receiving increasing attention. Recently, generative adversarial networks (GANs) have shown promising results for category text generation. However, the texts generated by GANs usually suffer from mode collapse and training instability. To avoid these problems, in this study, inspired by multi-task learning, a novel model called the category-aware variational recurrent neural network (CatVRNN) is proposed. In this model, the generation and classification tasks are trained simultaneously to generate texts of different categories. Multi-task learning can improve the quality of the generated texts when the classification task is appropriate. In addition, a function is proposed to initialize the hidden state of the CatVRNN, forcing the model to generate text of a specific category. Experimental results on three datasets demonstrate that the model can outperform state-of-the-art GAN-based text generation methods in terms of the diversity of the generated texts.
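The abstract mentions a function that initializes the recurrent hidden state from the target category so that generation starts in a category-specific region of the state space. A minimal sketch of one plausible realization is shown below; the function name, the one-hot encoding, the affine parameterization, and the tanh nonlinearity are illustrative assumptions, not the paper's actual definition:

```python
import numpy as np

def init_hidden_state(category, num_categories, W, b):
    """Map a category label to an initial RNN hidden state (sketch).

    A one-hot category vector is passed through an affine transform
    (W, b) followed by tanh, so each category starts the recurrent
    generator from a distinct point in the hidden-state space.
    """
    one_hot = np.zeros(num_categories)
    one_hot[category] = 1.0
    return np.tanh(W @ one_hot + b)

# Toy example with fixed (untrained) parameters:
rng = np.random.default_rng(0)
num_categories, hidden_size = 3, 8
W = rng.normal(size=(hidden_size, num_categories))
b = np.zeros(hidden_size)

h0_cat0 = init_hidden_state(0, num_categories, W, b)  # category 0
h0_cat1 = init_hidden_state(1, num_categories, W, b)  # category 1
print(h0_cat0.shape)                  # (8,)
print(np.allclose(h0_cat0, h0_cat1))  # False: distinct starting states
```

In practice W and b would be learned jointly with the rest of the model, so that the classifier branch pushes the per-category starting states apart.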