Volume 2021 (2021): Issue 3 (July 2021)
16 Apr 2015
4 issues per year
Access type: Open Access

FoggySight: A Scheme for Facial Lookup Privacy

Published online: 27 Apr 2021
Volume & Issue: Volume 2021 (2021) - Issue 3 (July 2021)
Page range: 204 - 226
Received: 30 Nov 2020
Accepted: 16 Mar 2021

Advances in deep learning algorithms have enabled better-than-human performance on face recognition tasks. In parallel, private companies have been scraping social media and other public websites that tie photos to identities and have built up large databases of labeled face images. Searches in these databases are now being offered as a service to law enforcement and others and carry a multitude of privacy risks for social media users. In this work, we tackle the problem of providing privacy from such face recognition systems. We propose and evaluate FoggySight, a solution that applies lessons learned from the adversarial examples literature to modify facial photos in a privacy-preserving manner before they are uploaded to social media. FoggySight’s core feature is a community protection strategy where users acting as protectors of privacy for others upload decoy photos generated by adversarial machine learning algorithms. We explore different settings for this scheme and find that it does enable protection of facial privacy – including against a facial recognition service with unknown internals.
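The decoy strategy described above can be illustrated with a minimal sketch. The code below is not FoggySight's actual algorithm: a toy linear map stands in for a deep face-embedding network, and a simple projected-gradient-descent loop crafts a small, bounded perturbation of a protector's photo so that its embedding moves toward a target user's embedding. All specifics (the linear "model," dimensions, step size, and perturbation budget) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    """Toy stand-in for a face-embedding network: a linear map followed
    by L2 normalization (real systems use deep CNNs such as FaceNet)."""
    v = W @ x
    return v / np.linalg.norm(v)

def make_decoy(x, t, W, eps=0.1, steps=300, lr=0.02):
    """PGD-style search for a perturbation delta with ||delta||_inf <= eps
    that pushes embed(x + delta) toward the target embedding t, so the
    perturbed photo acts as a decoy for the target identity."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        v = W @ (x + delta)
        n = np.linalg.norm(v)
        u = v / n
        # gradient of the loss -t.u with respect to the perturbed input
        g = W.T @ (-(t - (t @ u) * u) / n)
        # signed gradient step, clipped back into the L-infinity ball
        delta = np.clip(delta - lr * np.sign(g), -eps, eps)
    return x + delta

W = rng.standard_normal((16, 32))     # toy "embedding model"
protector = rng.standard_normal(32)   # a protector's photo (flattened)
target = rng.standard_normal(32)      # the user being protected
t = embed(target, W)

decoy = make_decoy(protector, t, W)
before = embed(protector, W) @ t      # cosine similarity before...
after = embed(decoy, W) @ t           # ...and after the perturbation
print(f"similarity to target identity: {before:.3f} -> {after:.3f}")
```

The perturbation budget `eps` caps the per-pixel change, which is what keeps a real decoy visually close to the original photo while its embedding lands near the target's identity in the database.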
