We discussed the beam search used during inference, and how to model that process at training time with a Graph Transformer Network (GTN). Graph Transformer Networks are essentially weighted finite-state automata with automatic differentiation, which let us encode priors into graphs. There are different types of weighted finite-state automata and different operations on them, including union, Kleene closure, intersection, composition, and the forward score. The loss function is typically a difference between such scores. These networks are easy to implement with the GTN library.
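The forward score mentioned above is the log-sum-exp of the total arc weight over all accepting paths through a graph; losses of this family are then written as differences of two forward scores. A minimal pure-Python sketch of the computation for an acyclic graph (the actual GTN library implements this natively with automatic differentiation; the graph encoding here is a simplified assumption, not the library's API):

```python
import math

NEG_INF = float("-inf")

def logadd(a, b):
    # numerically stable log(exp(a) + exp(b))
    if a == NEG_INF:
        return b
    if b == NEG_INF:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def forward_score(num_nodes, arcs, starts, accepts):
    """Forward score of an acyclic weighted graph:
    log-sum-exp of the summed arc weights over all start->accept paths.
    Assumes nodes are topologically ordered (src < dst for every arc).
    arcs: list of (src, dst, weight) tuples."""
    score = [NEG_INF] * num_nodes
    for s in starts:
        score[s] = 0.0  # empty path has weight 0 in log space
    for src, dst, w in sorted(arcs):
        if score[src] != NEG_INF:
            # log-add the contribution of paths reaching dst via this arc
            score[dst] = logadd(score[dst], score[src] + w)
    total = NEG_INF
    for a in accepts:
        total = logadd(total, score[a])
    return total

# Two accepting paths: 0->1->2 with weight 1+2=3, and 0->2 with weight 4,
# so the forward score is log(e^3 + e^4).
print(forward_score(3, [(0, 1, 1.0), (0, 2, 4.0), (1, 2, 2.0)], [0], [2]))
```

A loss such as the difference of the forward scores of a constrained graph and an unconstrained graph can then be built from two calls to this function.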
Neural network based end-to-end Text-to-Speech (TTS) has greatly improved the quality of synthesized speech. However, how to efficiently use massive spontaneous speech without transcription remains an open problem. In this paper, we propose MHTTS, a fast multi-speaker TTS system that is robust to transcription errors and spontaneous speaking styles. Specifically, we introduce a multi-head model and transfer text information from a high-quality corpus with manual transcriptions to spontaneous speech with imperfectly recognized transcriptions by training them jointly. MHTTS has three advantages: 1) our system synthesizes higher-quality multi-speaker voices with faster inference; 2) our system can transfer correct text information to data with imperfect transcriptions, whether simulated through corruption or produced by an Automatic Speech Recogniser (ASR); 3) our system can exploit massive real spontaneous speech with imperfect transcriptions to synthesize expressive voices.