LOWKEY: LEVERAGING ADVERSARIAL ATTACKS TO PROTECT SOCIAL MEDIA USERS FROM FACIAL RECOGNITION

Abstract

Facial recognition systems are increasingly deployed by private corporations, government agencies, and contractors for consumer services and mass surveillance programs alike. These systems are typically built by scraping social media profiles for user images. Adversarial perturbations have been proposed as a means of bypassing facial recognition systems. However, existing methods fail on full-scale systems and commercial APIs. We develop our own adversarial filter that accounts for the entire image processing pipeline and is demonstrably effective against industrial-grade pipelines that include face detection and large-scale databases. Additionally, we release an easy-to-use webtool that significantly degrades the accuracy of Amazon Rekognition and the Microsoft Azure Face Recognition API, reducing the accuracy of each to below 1%.

1. INTRODUCTION

Facial recognition (FR) systems are widely deployed for mass surveillance by government agencies, government contractors, and private companies alike on massive databases of images belonging to private individuals (Hartzog, 2020; Derringer, 2019; Weise & Singer, 2020). Recently, these systems have been thrust into the limelight amid outrage over invasions of personal privacy and concerns regarding fairness (Singer, 2018; Lohr, 2018; Cherepanova et al., 2021). Practitioners populate their databases by hoarding publicly available images from social media outlets, so users are forced to choose between keeping their images out of public view or taking their chances with mass surveillance. We develop a tool, LowKey, for protecting users from unauthorized surveillance by leveraging methods from the adversarial attack literature, and we make it available to the public as a webtool.
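To make the underlying idea from the adversarial attack literature concrete, the sketch below perturbs an input so that its feature-space representation moves far from the original, while an L-infinity constraint keeps the perturbation small. This is only an illustrative toy: the linear map `W` stands in for a deep face feature extractor, and `attack` is a generic signed-gradient ascent, not the LowKey filter itself (which additionally models the full image processing pipeline, including face detection).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face feature extractor: a fixed linear map.
# A real FR system would use a deep network; this is purely illustrative.
W = rng.standard_normal((8, 64))

def features(x):
    """Map a (flattened) image vector to its feature embedding."""
    return W @ x

def attack(x, eps=0.5, steps=50, lr=0.1):
    """Push features(x + delta) away from features(x), keeping
    the perturbation delta inside an L-infinity ball of radius eps."""
    f0 = features(x)
    # Small random start so the gradient is nonzero at the first step.
    delta = 0.01 * rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        # Gradient of ||W(x + delta) - W x||^2 with respect to delta.
        grad = 2 * W.T @ (features(x + delta) - f0)
        delta += lr * np.sign(grad)        # signed gradient ascent
        delta = np.clip(delta, -eps, eps)  # project back into the ball
    return x + delta

x = rng.standard_normal(64)       # stand-in for a user's photo
x_adv = attack(x)
print(float(np.max(np.abs(x_adv - x))))                    # bounded by eps
print(float(np.linalg.norm(features(x_adv) - features(x))))  # large feature shift
```

The perturbation stays visually minor (bounded per-pixel change) while the embedding moves substantially, which is why a matcher that compares embeddings against a scraped gallery can be derailed.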

