Real-time, streaming interactive avatars represent a critical yet challenging goal in digital human research. Although diffusion-based human avatar generation methods achieve remarkable success, their non-causal architectures and high computational costs make them unsuitable for streaming. Moreover, existing interactive approaches are typically restricted to the head-and-shoulders region, limiting their ability to produce gestures and body motions. To address these challenges, we propose a two-stage autoregressive adaptation and acceleration framework that applies autoregressive distillation and adversarial refinement to adapt a high-fidelity human video diffusion model for real-time, interactive streaming. To ensure long-term stability and consistency, we introduce three key components: a Reference Sink, a Reference-Anchored Positional Re-encoding (RAPR) strategy, and a Consistency-Aware Discriminator. Building on this framework, we develop a one-shot, interactive human avatar model capable of generating both natural talking and listening behaviors with coherent gestures. Extensive experiments demonstrate that our method achieves state-of-the-art performance, surpassing existing approaches in generation quality, real-time efficiency, and interaction naturalness. Project page: https://streamavatar.github.io.