With the rise of online dance-video platforms and rapid advances in AI-generated content (AIGC), music-driven dance video generation has emerged as a compelling research direction. Despite substantial progress in related domains such as music-driven 3D dance generation, pose-driven image animation, and audio-driven talking-head synthesis, existing methods cannot be directly adapted to this task. Moreover, the few studies in this area still struggle to jointly achieve high-quality visual appearance and realistic human motion. Accordingly, we present MACE-Dance, a music-driven dance video generation framework built on a cascaded Mixture-of-Experts (MoE) design. The Motion Expert performs music-to-3D motion generation while enforcing kinematic plausibility and artistic expressiveness, whereas the Appearance Expert performs motion- and reference-conditioned video synthesis, preserving visual identity with spatiotemporal coherence. Specifically, the Motion Expert adopts a diffusion model with a BiMamba-Transformer hybrid architecture and a Guidance-Free Training (GFT) strategy, achieving state-of-the-art (SOTA) performance in 3D dance generation. The Appearance Expert employs a decoupled kinematic-aesthetic fine-tuning strategy, achieving SOTA performance in pose-driven image animation. To better benchmark this task, we curate a large-scale, diverse dataset and design a motion-appearance evaluation protocol, under which MACE-Dance also achieves SOTA performance. Project page: https://macedance.github.io/