In this work, a robust and efficient text-to-speech system, named Triple M, is proposed for large-scale online applications. The key components of Triple M are: 1) A seq2seq model with multi-guidance attention, which achieves stable feature generation and robust long-sentence synthesis by learning from multiple guidance attention mechanisms. Multi-guidance attention improves the robustness and naturalness of long-sentence synthesis without any in-domain performance loss or online service modification. Compared with our best result obtained using a single attention mechanism (GMM-based attention), the word error rate of long-sentence synthesis decreases by 23.5% when the multi-guidance attention mechanism is applied. 2) An efficient multi-band multi-time LPCNet, which reduces the computational complexity of LPCNet by combining multi-band and multi-time strategies (from 2.8 to 1.0 GFLOPS). Owing to these strategies, the vocoder speed is increased by 2.75x on a single CPU without much MOS degradation (4.57 vs. 4.45).