For few-shot learning, realizing photo-realistic face visually dubbing on high-resolution videos remains a critical challenge, and previous works fail to generate high-fidelity dubbing results. To address this problem, this paper proposes a Deformation Inpainting Network (DINet) for high-resolution face visually dubbing. Unlike previous works that rely on multiple up-sampling layers to directly generate pixels from latent embeddings, DINet performs spatial deformation on the feature maps of reference images to better preserve high-frequency textural details. Specifically, DINet consists of a deformation part and an inpainting part. In the first part, the feature maps of five reference facial images are adaptively deformed in space to create, for each frame, deformed feature maps whose mouth shapes align with both the driving audio and the head poses of the source images. In the second part, a feature decoder adaptively merges the mouth movements from the deformed feature maps with the other attributes (i.e., head pose and upper facial expression) from the source feature maps to produce the dubbed face. As a result, DINet achieves face visually dubbing with rich textural details. We conduct qualitative and quantitative comparisons to validate our DINet on high-resolution videos, and the experimental results show that our method outperforms state-of-the-art works.
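To make the two-part design concrete, below is a minimal PyTorch sketch of the idea: a deformation module predicts a dense offset field from the source features, reference features, and an audio embedding, warps the reference feature maps with grid sampling, and an inpainting decoder fuses the deformed features with the source features into a frame. All module names, channel sizes, and the flow-based warping are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of a deformation-then-inpainting pipeline (not the official DINet code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformationPart(nn.Module):
    """Predicts a dense offset field from (source, reference, audio) features
    and spatially deforms the reference feature maps with grid_sample."""
    def __init__(self, feat_ch=64, audio_dim=128):
        super().__init__()
        # hypothetical fusion head: concatenated features -> 2-channel offset field
        self.flow_head = nn.Sequential(
            nn.Conv2d(feat_ch * 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1),
        )
        self.audio_proj = nn.Linear(audio_dim, feat_ch)

    def forward(self, src_feat, ref_feat, audio_emb):
        b, c, h, w = ref_feat.shape
        # broadcast the audio embedding spatially and inject it into the source features
        audio_map = self.audio_proj(audio_emb).view(b, c, 1, 1).expand(b, c, h, w)
        flow = self.flow_head(torch.cat([src_feat + audio_map, ref_feat], dim=1))
        # sampling grid = identity grid + predicted offsets (normalized coordinates)
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
        identity = torch.stack([xs, ys], dim=-1).to(ref_feat).expand(b, h, w, 2)
        grid = identity + flow.permute(0, 2, 3, 1)
        # warp the reference features so the mouth region follows the audio and pose
        return F.grid_sample(ref_feat, grid, align_corners=True)


class InpaintingPart(nn.Module):
    """Decodes a dubbed frame from source features (pose, upper face)
    fused with the deformed reference features (mouth region)."""
    def __init__(self, feat_ch=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_ch * 2, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, src_feat, deformed_feat):
        return self.decoder(torch.cat([src_feat, deformed_feat], dim=1))


if __name__ == "__main__":
    b, c, h, w = 1, 64, 32, 32
    src_feat = torch.randn(b, c, h, w)   # feature map of the source frame
    ref_feat = torch.randn(b, c, h, w)   # pooled feature map of the reference frames
    audio_emb = torch.randn(b, 128)      # embedding of the driving audio window
    deformed = DeformationPart()(src_feat, ref_feat, audio_emb)
    frame = InpaintingPart()(src_feat, deformed)
    print(frame.shape)  # torch.Size([1, 3, 32, 32])
```

The key design choice the sketch highlights is that textural detail is carried by warping existing reference features rather than by regenerating pixels from a latent code through up-sampling layers.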