Advances in deep learning have made face recognition technologies pervasive. While useful to social media platforms and users, this technology carries significant privacy threats. Coupled with the abundant information service providers already hold about users, face recognition lets them associate users with social interactions, visited places, activities, and preferences, some of which the user may not want to share. Additionally, facial recognition models used by various agencies are trained on data scraped from social media platforms. Existing approaches to mitigating the privacy risks of unwanted face recognition impose an unfavorable privacy-utility trade-off on users. In this paper, we address this trade-off by proposing Face-Off, a privacy-preserving framework that introduces strategic perturbations to the user's face to prevent it from being correctly recognized. To realize Face-Off, we overcome a set of challenges related to the black-box nature of commercial face recognition services and the scarcity of literature on adversarial attacks against metric networks. We implement and evaluate Face-Off and find that it deceives three commercial face recognition services, from Microsoft, Amazon, and Face++. Our user study with 423 participants further shows that the perturbations come at an acceptable cost to users.
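For intuition, the sketch below illustrates the general class of attack the abstract alludes to: an adversarial perturbation crafted against a face embedding (metric) network so that the perturbed face no longer matches its true identity. This is a minimal, generic PGD-style example under assumed conventions (an L2-normalized embedding model, images in [0, 1], cosine-similarity matching); the function name, loss, and hyperparameters are illustrative assumptions, not Face-Off's actual algorithm.

```python
# Illustrative sketch only: a generic PGD-style evasion attack on a face
# embedding ("metric") network. Model, loss, and budgets are assumptions,
# not the Face-Off design described in the paper.
import torch
import torch.nn.functional as F

def evade_embedding(model, image, reference_emb, eps=8/255, alpha=2/255, steps=40):
    """Perturb `image` so its embedding moves away from `reference_emb`.

    model:          maps a (1, C, H, W) image tensor to an L2-normalized embedding
    image:          original face image, values in [0, 1]
    reference_emb:  embedding of the user's true identity
    eps:            L_inf budget keeping the perturbation visually small
    """
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        emb = model(adv)
        # Minimize cosine similarity to the true identity's embedding, so a
        # matcher that thresholds on embedding distance fails to recognize it.
        loss = F.cosine_similarity(emb, reference_emb).mean()
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() - alpha * grad.sign()       # step down the similarity
        adv = image + (adv - image).clamp(-eps, eps)   # project onto the L_inf budget
        adv = adv.clamp(0.0, 1.0)                      # keep a valid image
    return adv
```

Because the commercial services are black-box, perturbations of this kind would have to be crafted against local surrogate models and rely on transferability; how Face-Off formulates and transfers its perturbations is detailed in the paper itself.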