Recent advances in zero-shot text-to-speech (TTS), driven by language models, diffusion models, and masked generation, have achieved impressive naturalness in speech synthesis. Nevertheless, stability and fidelity remain key challenges, manifesting as mispronunciations, audible noise, and quality degradation. To address these issues, we introduce Vox-Evaluator, a multi-level evaluator designed to guide the correction of erroneous speech segments and preference alignment for TTS systems. It identifies the temporal boundaries of erroneous segments and provides a holistic quality assessment of the generated speech. Specifically, to refine erroneous segments and enhance the robustness of the zero-shot TTS model, we propose to automatically identify acoustic errors with the evaluator, mask the erroneous segments, and regenerate the speech conditioned on the correct portions. In addition, the fine-grained information provided by Vox-Evaluator can guide preference alignment for the TTS model, thereby reducing bad cases in speech synthesis. Because no suitable training dataset exists for Vox-Evaluator, we also construct a synthesized text-speech dataset annotated with fine-grained pronunciation errors or audio quality issues. Experimental results demonstrate the effectiveness of the proposed Vox-Evaluator in enhancing the stability and fidelity of TTS systems through the speech correction mechanism and preference optimization. Demos are available.
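As a rough illustration of the correction mechanism summarized above (detect erroneous segments, mask them, and regenerate conditioned on the correct portions), the sketch below shows how such a loop could be wired together. All names here (VoxEvaluator-style `evaluator` and `tts` objects, `localize_errors`, `spans_to_frame_mask`, `infill`, `overall_quality`) are hypothetical placeholders, not the paper's actual interfaces.

```python
# Minimal sketch of the detect-mask-regenerate correction loop, assuming
# hypothetical evaluator/TTS interfaces; not the paper's implementation.
from dataclasses import dataclass


@dataclass
class ErrorSpan:
    start: float  # start time of the erroneous segment (seconds)
    end: float    # end time of the erroneous segment (seconds)
    kind: str     # e.g. "mispronunciation" or "audio_quality"


def correct_speech(wav, text, evaluator, tts, max_rounds=3):
    """Iteratively localize erroneous segments, mask them, and regenerate
    only the masked regions while conditioning on the correct portions."""
    for _ in range(max_rounds):
        spans: list[ErrorSpan] = evaluator.localize_errors(wav, text)
        if not spans:
            break  # the utterance passes the fine-grained check
        mask = evaluator.spans_to_frame_mask(spans, wav)
        # Re-synthesize only the masked frames; the rest stays fixed.
        wav = tts.infill(text=text, audio=wav, frame_mask=mask)
    score = evaluator.overall_quality(wav, text)  # holistic assessment
    return wav, score
```

The same fine-grained error spans and holistic scores could, in principle, also be used to rank paired outputs when constructing preference data for alignment, as described above.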