Realistic dynamic garments on animated characters have many AR/VR applications. While authoring such dynamic garment geometry is still a challenging task, data-driven simulation provides an attractive alternative, especially if it can be controlled simply using the motion of the underlying character. In this work, we focus on motion-guided dynamic 3D garments, especially loose garments. In a data-driven setup, we first learn a generative space of plausible garment geometries. We then learn a mapping to this space that captures motion-dependent dynamic deformations, conditioned on the previous state of the garment as well as its relative position with respect to the underlying body. Technically, we model garment dynamics, driven by the input character motion, by predicting per-frame local displacements in a canonical state of the garment that is enriched with frame-dependent skinning weights to bring the garment to the global space. We resolve any remaining per-frame collisions by predicting residual local displacements. The resultant garment geometry is used as history to enable iterative rollout prediction. We demonstrate plausible generalization to unseen body shapes and motion inputs, and show improvements over multiple state-of-the-art alternatives.
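The described pipeline can be read as an iterative rollout loop: predict canonical-space displacements from motion and garment history, skin them to world space with frame-dependent weights, add a collision-resolving residual, and feed the result back as history. Below is a minimal PyTorch sketch of that loop. All module names (DynamicsNet, SkinningNet, CollisionNet), network widths, and tensor shapes are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: garment vertex count, body joint count, and feature
# widths. These are assumptions chosen to make the sketch concrete.
V, J, F_MOTION, F_HIST = 4000, 24, 128, 64


class DynamicsNet(nn.Module):
    """Maps motion features + garment history to per-vertex local
    displacements in the canonical garment state."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(F_MOTION + F_HIST, 256), nn.ReLU(),
            nn.Linear(256, V * 3))

    def forward(self, motion_feat, hist_feat):
        x = torch.cat([motion_feat, hist_feat], dim=-1)
        return self.mlp(x).view(-1, V, 3)


class SkinningNet(nn.Module):
    """Predicts frame-dependent skinning weights (softmax over joints)."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Linear(F_MOTION, V * J)

    def forward(self, motion_feat):
        return self.mlp(motion_feat).view(-1, V, J).softmax(dim=-1)


class CollisionNet(nn.Module):
    """Predicts residual displacements that resolve remaining
    per-frame collisions of the posed garment."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(V * 3 + F_MOTION, 256), nn.ReLU(),
            nn.Linear(256, V * 3))

    def forward(self, posed_verts, motion_feat):
        x = torch.cat([posed_verts.flatten(1), motion_feat], dim=-1)
        return self.mlp(x).view(-1, V, 3)


def lbs(verts, weights, rot, trans):
    """Linear blend skinning: verts (B,V,3), weights (B,V,J),
    rot (B,J,3,3), trans (B,J,3) -> posed verts (B,V,3)."""
    per_joint = torch.einsum('bjmn,bvn->bjvm', rot, verts) + trans.unsqueeze(2)
    return torch.einsum('bvj,bjvm->bvm', weights, per_joint)


def rollout(canonical, motion_feats, rot, trans, dyn, skin, coll, hist_enc):
    """Iterative rollout: each predicted frame becomes the history
    input for the next one, as described in the abstract."""
    hist = torch.zeros(canonical.shape[0], F_HIST)
    frames = []
    for f in range(motion_feats.shape[1]):
        m = motion_feats[:, f]
        delta = dyn(m, hist)                     # canonical-space dynamics
        posed = lbs(canonical + delta, skin(m), rot[:, f], trans[:, f])
        posed = posed + coll(posed, m)           # residual collision handling
        frames.append(posed)
        hist = hist_enc(posed.flatten(1))        # garment state -> next history
    return torch.stack(frames, dim=1)


# Usage with dummy inputs (identity bone rotations, zero translations):
dyn, skin, coll = DynamicsNet(), SkinningNet(), CollisionNet()
hist_enc = nn.Linear(V * 3, F_HIST)
T = 10
canonical = torch.zeros(1, V, 3)
motion = torch.randn(1, T, F_MOTION)
rot = torch.eye(3).expand(1, T, J, 3, 3)
trans = torch.zeros(1, T, J, 3)
out = rollout(canonical, motion, rot, trans, dyn, skin, coll, hist_enc)
print(out.shape)  # (1, T, V, 3)
```

The key design point the sketch mirrors is that dynamics are predicted in the canonical garment state, so the networks never have to model global rigid motion; skinning and the collision residual then account for pose and body contact per frame.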