DeepFake face swapping enables highly realistic identity forgeries, posing serious privacy and security risks. A common defence embeds invisible perturbations into images, but these are fragile and often destroyed by basic transformations such as compression or resizing. In this paper, we first conduct a systematic analysis of 30 transformations across six categories and show that protection robustness is highly sensitive to the choice of training transformations, making standard Expectation over Transformation (EOT) with uniform sampling fundamentally suboptimal. Motivated by this, we propose Expectation Over Learned distribution of Transformation (EOLT), a framework that treats the transformation distribution as a learnable component rather than a fixed design choice. Specifically, EOLT employs a policy network trained via reinforcement learning to automatically prioritize critical transformations and adaptively generate instance-specific perturbations, enabling explicit modeling of defensive bottlenecks while maintaining broad transferability. Extensive experiments demonstrate that our method achieves substantial improvements over state-of-the-art approaches, with 26% higher average robustness and gains of up to 30% on challenging transformation categories.
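To make the core distinction concrete, the sketch below contrasts standard EOT's uniform sampling with EOLT-style sampling from a learned categorical distribution over transformations. This is a minimal illustration only: the transformation names and the example weights are hypothetical placeholders, not the paper's actual pool or learned policy, and the real method uses a policy network rather than a fixed weight vector.

```python
import random

# Hypothetical transformation pool; names are illustrative, not from the paper.
TRANSFORMS = ["jpeg", "resize", "blur", "noise", "crop", "gamma"]

def eot_sample(rng):
    """Standard EOT: sample a transformation uniformly at random."""
    return rng.choice(TRANSFORMS)

def eolt_sample(rng, weights):
    """EOLT-style sampling: draw from a learned categorical distribution,
    so critical (hard-to-defend) transformations are drawn more often.
    In the paper this distribution comes from a policy network; here it
    is a fixed weight vector for illustration."""
    return rng.choices(TRANSFORMS, weights=weights, k=1)[0]

rng = random.Random(0)
# Assume the learned policy up-weights compression and resizing,
# the transformations the analysis found most damaging to protection.
learned_weights = [0.35, 0.30, 0.10, 0.10, 0.10, 0.05]

uniform_draws = [eot_sample(rng) for _ in range(1000)]
learned_draws = [eolt_sample(rng, learned_weights) for _ in range(1000)]
```

Under uniform EOT every transformation is sampled about equally, while the learned distribution concentrates training on the defensive bottlenecks; the expectation in the perturbation objective is then taken over this learned distribution instead of the uniform one.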