Generating realistic human geometry animations remains challenging, as it requires modeling natural clothing dynamics with fine-grained geometric detail from limited data. To address these challenges, we propose two novel designs. First, we introduce a compact, distribution-based latent representation that enables efficient, high-quality geometry generation; it improves upon previous work by establishing a more uniform mapping between SMPL and avatar geometries. Second, we introduce a generative animation model that fully exploits the diversity of limited motion data, focusing on short-term transitions while maintaining long-term consistency through an identity-conditioned design. Together, these two designs form a two-stage framework: the first stage learns a latent space, and the second learns to generate animations within it. We evaluate both the latent space and the animation model. Our latent space produces high-fidelity human geometry, surpassing previous methods ($90\%$ lower Chamfer Distance), and our animation model synthesizes diverse animations with detailed, natural dynamics ($2.2\times$ higher user-study score), achieving the best results across all evaluation metrics.