We present Step-Audio-EditX, the first open-source LLM-based audio model excelling at expressive and iterative audio editing, encompassing emotion, speaking style, and paralinguistics, alongside robust zero-shot text-to-speech (TTS) capabilities. Our core innovation lies in leveraging only large-margin synthetic data, which circumvents the need for embedding-based priors or auxiliary modules. This large-margin learning approach enables both iterative control and high expressivity across voices, and represents a fundamental pivot away from the conventional focus on representation-level disentanglement. Evaluation results demonstrate that Step-Audio-EditX surpasses both MiniMax-2.6-hd and Doubao-Seed-TTS-2.0 in emotion editing and other fine-grained control tasks.