The AI Makeup Artist That Covers Your Identity: CLIP2Protect Is an AI Model That Uses Text-Guided Makeup to Protect Facial Privacy
Sci-fi movies from the '90s are full of computers that show a rotating profile of a person and display all kinds of information about them. In those films, face recognition technology is so advanced that no data about you can stay hidden from Big Brother.
We cannot claim they were wrong, unfortunately. Face recognition technology has advanced significantly with the advent of deep learning-based systems, revolutionizing numerous applications and industries. Whether this revolution was a good or a bad thing is a topic for another post, but the reality is that our faces can now be linked to a vast amount of data about us. That is where privacy becomes crucial.
In response to these concerns, the research community has been actively exploring methods and techniques to develop facial privacy protection algorithms that can safeguard individuals against the potential risks associated with face recognition systems.
The goal of facial privacy protection algorithms is to strike a balance between preserving an individual's privacy and maintaining the usability of their facial images. While the primary objective is to protect individuals from unauthorized identification or tracking, it is equally important that the protected images retain visual fidelity and resemblance to the original faces, so that protection does not amount to simply replacing the photo with a fake face.
Achieving this balance is challenging, particularly with noise-based methods that overlay adversarial artifacts on the original face image. Several approaches have been proposed to generate unrestricted adversarial examples, with adversarial makeup-based methods being among the most popular thanks to their ability to embed adversarial modifications in a more natural manner. However, existing techniques suffer from limitations such as makeup artifacts, dependence on reference images, the need for retraining for each target identity, and a focus on impersonation rather than privacy preservation.
So, there is a need for a reliable method to protect facial privacy, but existing ones suffer from obvious shortcomings. How can we solve this? Time to meet CLIP2Protect.
CLIP2Protect is a novel approach for protecting user facial privacy on online platforms. It searches for adversarial latent codes in a low-dimensional manifold learned by a generative model. These latent codes can be used to generate high-quality face images that preserve a realistic, recognizable identity for human observers while deceiving black-box face recognition (FR) systems.
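At a high level, this can be pictured as an optimization over a generator's latent code: decode the latent into an image, embed that image with a surrogate FR model, and push the embedding away from the user's true identity while keeping the latent close to where it started. The snippet below is a minimal, hypothetical sketch of such a loop in PyTorch; `FaceGenerator` and `FaceEmbedder` are toy stand-ins, not the actual StyleGAN-style generator and FR networks used in the paper, and the loss weights are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: in practice these would be a pretrained StyleGAN-like
# generator and a pretrained face recognition backbone (e.g., an ArcFace model).
class FaceGenerator(nn.Module):
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Linear(latent_dim, 3 * 64 * 64)  # toy decoder

    def forward(self, w):
        return torch.tanh(self.net(w)).view(-1, 3, 64, 64)

class FaceEmbedder(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, emb_dim))

    def forward(self, img):
        return F.normalize(self.net(img), dim=-1)

generator, fr_model = FaceGenerator(), FaceEmbedder()
for p in list(generator.parameters()) + list(fr_model.parameters()):
    p.requires_grad_(False)  # only the latent code is optimized

w_orig = torch.randn(1, 512)            # latent of the user's face (e.g., from GAN inversion)
true_emb = fr_model(generator(w_orig))  # FR embedding of the original identity

w_adv = w_orig.clone().requires_grad_(True)  # adversarial latent to optimize
optimizer = torch.optim.Adam([w_adv], lr=0.01)

for step in range(200):
    protected = generator(w_adv)
    emb = fr_model(protected)
    # Push the protected face's FR embedding away from the true identity...
    adv_loss = F.cosine_similarity(emb, true_emb).mean()
    # ...while keeping the latent near the clean-image manifold point it started from.
    prox_loss = F.mse_loss(w_adv, w_orig)
    loss = adv_loss + 0.1 * prox_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A dodging-style objective is shown here for simplicity; an impersonation-style variant would instead minimize the distance to a chosen target identity's embedding.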
A key component of CLIP2Protect is the use of textual prompts to facilitate adversarial makeup transfer, allowing the traversal of the generative model's latent manifold to find transferable adversarial latent codes. This technique effectively hides the attack information within the desired makeup style without requiring large makeup datasets or retraining for different target identities. CLIP2Protect also introduces an identity-preserving regularization technique to ensure that the protected face images visually resemble the original faces.
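The text guidance can be sketched with OpenAI's CLIP: encode the makeup prompt and the generated face into CLIP's joint embedding space and penalize their dissimilarity, so gradient steps on the latent code steer the face toward the described makeup style. The function below is a simplified illustration of such a CLIP loss, not the paper's exact formulation; it assumes the `clip` package from OpenAI is installed and that the face is a differentiable image tensor in [0, 1].

```python
import torch
import torch.nn.functional as F
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)
clip_model.eval()  # CLIP stays frozen; gradients flow only to the latent code

# CLIP's expected input normalization constants.
CLIP_MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
CLIP_STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

def clip_makeup_loss(face: torch.Tensor, prompt: str) -> torch.Tensor:
    """1 - cosine similarity between the generated face and a makeup text prompt.

    `face` is an (N, 3, H, W) tensor in [0, 1] produced by the generator, so the
    loss is differentiable with respect to the latent code.
    """
    # Resize and normalize to what the CLIP image encoder expects.
    img = F.interpolate(face, size=(224, 224), mode="bilinear", align_corners=False)
    img = ((img - CLIP_MEAN) / CLIP_STD).to(clip_model.dtype)

    image_feat = clip_model.encode_image(img)
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))

    image_feat = F.normalize(image_feat, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)
    return 1.0 - (image_feat * text_feat).sum(dim=-1).mean()

# Example: add this term to the adversarial objective from the previous sketch.
# loss = adv_loss + 0.1 * prox_loss + clip_makeup_loss(protected, "a face with bold red lipstick")
```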
To ensure the naturalness and fidelity of the protected images, the search for adversarial faces is constrained to stay close to the clean image manifold learned by the generative model. This restriction helps mitigate the generation of artifacts or unrealistic features that could be easily detected by human observers or automated systems. Additionally, CLIP2Protect optimizes only the identity-preserving latent codes in the latent space, ensuring that the protected faces retain the human-perceived identity of the individual.
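Concretely, identity preservation can be encouraged in two ways: optimizing only a subset of a layered, W+-style latent code while keeping the rest fixed, and adding penalties that keep the protected face close to the original in latent, pixel, and face-embedding space. The sketch below shows one way to wire this up on top of the previous snippets; the split point between frozen and optimized latent layers and the particular penalty terms are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

# Assume a W+-style latent of shape (num_layers, latent_dim), as in StyleGAN,
# where early layers control coarse structure and later layers finer appearance.
num_layers, latent_dim = 18, 512
w_orig_plus = torch.randn(num_layers, latent_dim)

# Freeze the coarse, identity-bearing layers; optimize only the later ones,
# which mostly affect texture and color (where makeup lives).
split = 8  # illustrative choice of split point
w_frozen = w_orig_plus[:split].detach()
w_free = w_orig_plus[split:].clone().requires_grad_(True)

optimizer = torch.optim.Adam([w_free], lr=0.01)

def assemble_latent():
    # Recombine frozen and optimized layers into a full latent code for the generator.
    return torch.cat([w_frozen, w_free], dim=0)

def identity_regularizers(protected_img, original_img, aux_fr_model):
    """Penalties that keep the protected face visually close to the original.

    `aux_fr_model` would ideally be a different embedder than the one being
    attacked (or a perceptual/identity network), so human-perceived identity
    is preserved even as the attacked FR model is fooled.
    """
    latent_prox = F.mse_loss(w_free, w_orig_plus[split:])
    pixel_prox = F.l1_loss(protected_img, original_img)
    id_sim = F.cosine_similarity(aux_fr_model(protected_img),
                                 aux_fr_model(original_img)).mean()
    return latent_prox + pixel_prox + (1.0 - id_sim)

# In the optimization loop, these terms would join the adversarial and CLIP losses:
#   protected = generator(assemble_latent().unsqueeze(0))
#   loss = adv_loss + clip_makeup_loss(protected, prompt) \
#          + identity_regularizers(protected, original_img, aux_fr_model)
```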
To introduce privacy-enhancing perturbations, CLIP2Protect utilizes text prompts as guidance for generating makeup-like transformations. This approach offers the user greater flexibility than reference-image-based methods, as desired makeup styles and attributes can be specified through textual descriptions. By leveraging these prompts, the method effectively embeds privacy-protection information in the makeup style without needing a large makeup dataset or retraining for different target identities.
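Because the guidance is purely textual, switching makeup styles amounts to changing the prompt fed into a CLIP loss like the one sketched earlier, with no retraining involved. A hypothetical usage pattern (the `optimize_latent` wrapper is an assumed helper around the optimization loop above, not an API from the paper):

```python
# Same generator, surrogate FR model, and CLIP model in every run; only the prompt changes.
prompts = [
    "a face with subtle nude makeup",
    "a face with bold red lipstick and dark eyeliner",
    "a face with purple eyeshadow",
]
# for prompt in prompts:
#     protected = optimize_latent(w_orig, prompt)  # hypothetical wrapper around the loop above
```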
Extensive experiments evaluate the effectiveness of CLIP2Protect in face verification and identification scenarios. The results demonstrate its efficacy against black-box FR models as well as online commercial face recognition APIs.
Check out the Paper and Project Page.
Ekrem Çetinkaya received his B.Sc. in 2018 and M.Sc. in 2019 from Ozyegin University, Istanbul, Türkiye. He wrote his M.Sc. thesis about image denoising using deep convolutional networks. He received his Ph.D. degree in 2023 from the University of Klagenfurt, Austria, with his dissertation titled “Video Coding Enhancements for HTTP Adaptive Streaming Using Machine Learning.” His research interests include deep learning, computer vision, video encoding, and multimedia networking.