Building animatable and editable models of clothed humans from raw 3D scans and poses is a challenging problem. Existing reposing methods suffer from the limited expressiveness of Linear Blend Skinning (LBS), require costly mesh extraction to generate each new pose, and typically do not preserve surface correspondences across different poses. In this work, we introduce Invertible Neural Skinning (INS) to address these shortcomings. To maintain correspondences, we propose a Pose-conditioned Invertible Network (PIN) architecture, which extends the LBS process by learning additional pose-varying deformations. Next, we combine PIN with a differentiable LBS module to build an expressive and end-to-end Invertible Neural Skinning (INS) pipeline. We demonstrate the strong performance of our method by outperforming state-of-the-art reposing techniques on clothed humans while preserving surface correspondences, all while being an order of magnitude faster. We also perform an ablation study, which shows the usefulness of our pose-conditioning formulation, and our qualitative results show that INS effectively rectifies artefacts introduced by LBS. See our webpage for more details: https://yashkant.github.io/invertible-neural-skinning/
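As background, the standard LBS process that INS builds on deforms each vertex by a convex combination of per-bone rigid transforms. The sketch below is a generic illustration of vanilla LBS only, not the paper's INS pipeline; the function and variable names are my own:

```python
import numpy as np

def linear_blend_skinning(vertices, weights, transforms):
    """Vanilla LBS (illustrative sketch, not the paper's method).

    vertices:   (N, 3) rest-pose vertex positions
    weights:    (N, B) skinning weights, each row summing to 1
    transforms: (B, 4, 4) homogeneous bone transforms
    Returns (N, 3) posed vertex positions.
    """
    N = len(vertices)
    # Lift vertices to homogeneous coordinates.
    v_h = np.hstack([vertices, np.ones((N, 1))])          # (N, 4)
    # Blend bone transforms per vertex with the skinning weights.
    blended = np.einsum("nb,bij->nij", weights, transforms)  # (N, 4, 4)
    # Apply each vertex's blended transform.
    posed = np.einsum("nij,nj->ni", blended, v_h)            # (N, 4)
    return posed[:, :3]
```

The limited expressiveness mentioned in the abstract follows from this formulation: blending rigid transforms linearly cannot represent pose-dependent, non-rigid effects such as cloth wrinkling, which is what the learned pose-varying deformations in PIN are meant to capture.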