Disentangling content and style in the latent space is a prevalent approach to unpaired text style transfer. However, two major issues exist in most current neural models. 1) It is difficult to completely strip the style information from the semantics of a sentence. 2) The recurrent neural network (RNN) based encoder and decoder, mediated by the latent representation, cannot handle long-term dependencies well, resulting in poor preservation of non-stylistic semantic content. In this paper, we propose the Style Transformer, which makes no assumption about the latent representation of the source sentence and leverages the attention mechanism of the Transformer to achieve both better style transfer and better content preservation.
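The attention mechanism the abstract refers to lets every decoder position attend directly to every source token, rather than squeezing the whole sentence through a single latent vector as an RNN encoder-decoder does. A minimal sketch of the underlying scaled dot-product attention (an illustrative NumPy implementation, not the paper's code; the array names `Q`, `K`, `V` follow the standard Transformer notation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend from n_q query vectors to n_k key/value pairs.

    Q: (n_q, d) queries, K: (n_k, d) keys, V: (n_k, d_v) values.
    Returns an (n_q, d_v) array: each output row is a softmax-weighted
    average of the value rows, so every query sees every source
    position directly -- no fixed-size latent bottleneck.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V
```

Because the attention weights form a distribution over all source positions, content far from the current decoding step contributes as easily as nearby content, which is what motivates the claim about better preservation of long-range, non-stylistic content.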