To protect privacy and prevent the malicious use of deepfakes, current studies propose methods that interfere with the generation process, such as detection and destruction approaches. However, these methods generalize poorly to unseen models and add undesirable noise to the original image. To address these problems, we propose a new problem formulation for deepfake prevention: generating a ``scapegoat image'' by modifying the style of the original input so that the user can still recognize it as an avatar, but the real face cannot be reconstructed from it. Even if the scapegoat is used for malicious deepfake generation, the privacy of the user remains protected. To achieve this, we introduce an optimization-based editing method that utilizes GAN inversion to discourage deepfake models from generating similar scapegoats. We validate the effectiveness of the proposed method through quantitative evaluations and user studies.
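As an illustration only (not the paper's implementation), the sketch below shows one way a GAN-inversion-based optimization of this kind could be set up: a latent code is optimized so that the edited image stays close to the original in appearance while its identity embedding is pushed away from the real face. `Generator`, `IdEncoder`, the loss weights, and all hyperparameters are hypothetical stand-ins introduced here for illustration.

```python
# Illustrative sketch, assuming a pretrained StyleGAN-like generator and a face
# identity encoder; both classes below are hypothetical stand-ins so the example runs.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Hypothetical stand-in for a pretrained GAN generator G(w) -> image."""
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Linear(latent_dim, 3 * 64 * 64)
    def forward(self, w):
        return self.net(w).view(-1, 3, 64, 64)

class IdEncoder(nn.Module):
    """Hypothetical stand-in for a face identity embedding network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

def make_scapegoat(x_real, G, E, steps=200, lam=1.0):
    """Optimize a latent code so the edited image stays perceptually close to the
    original (recognizable as an avatar) while its identity embedding moves away
    from the real face, discouraging reconstruction of the true identity."""
    w = torch.zeros(1, 512, requires_grad=True)           # inverted latent (init)
    opt = torch.optim.Adam([w], lr=0.01)
    id_real = E(x_real).detach()
    for _ in range(steps):
        x_fake = G(w)
        recon = nn.functional.mse_loss(x_fake, x_real)     # keep overall appearance
        id_sim = (E(x_fake) * id_real).sum(dim=-1).mean()  # cosine similarity to real identity
        loss = recon + lam * id_sim                        # penalize similarity to the real face
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(w).detach()

if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)
    scapegoat = make_scapegoat(x, Generator(), IdEncoder())
    print(scapegoat.shape)  # torch.Size([1, 3, 64, 64])
```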