Meet GANonymization: A Novel Face Anonymization Framework With Facial Expression-Preserving Abilities

In recent years, with the exponential growth in the availability of personal data and the rapid advancement of technology, concerns regarding privacy and security have been amplified. As a result, data anonymization has become increasingly important: it plays a crucial role in protecting people’s privacy and preventing the accidental sharing of sensitive information.

Data anonymization methods like generalization, suppression, randomization, and perturbation are commonly used to protect privacy while sharing and analyzing data. However, these methods have weaknesses. Generalization can cause information loss and reduced accuracy, suppression may result in incomplete data sets, randomization techniques can leave room for re-identification attacks, and perturbation can introduce noise that impacts data quality. Striking a balance between privacy and data utility is crucial when implementing these methods to overcome their limitations effectively.
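To make these classic techniques concrete, here is a minimal, illustrative Python sketch of suppression, generalization, and perturbation on a small tabular dataset. The column names, bin edges, and noise scale are made-up assumptions for this example, not part of the paper.

```python
# Minimal sketch of classic tabular anonymization steps (illustrative only;
# the column names and parameters are assumptions for this example).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "name": ["Alice", "Bob", "Carol"],
    "age": [34, 58, 23],
    "zip_code": ["80331", "10115", "50667"],
    "income": [52_000, 61_500, 39_200],
})

anonymized = df.copy()

# Suppression: drop the direct identifier entirely (may leave the data incomplete).
anonymized = anonymized.drop(columns=["name"])

# Generalization: replace exact ages with coarse ranges (loses precision).
anonymized["age"] = pd.cut(anonymized["age"], bins=[0, 30, 50, 120],
                           labels=["<30", "30-49", "50+"])

# Generalization: keep only a zip-code prefix.
anonymized["zip_code"] = anonymized["zip_code"].str[:2] + "xxx"

# Perturbation: add random noise to numeric values (degrades exact analyses).
rng = np.random.default_rng(0)
anonymized["income"] = anonymized["income"] + rng.normal(0, 1_000, len(anonymized))

print(anonymized)
```

Each step trades some utility for privacy, which is exactly the balance the paragraph above describes.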

Acquiring and sharing sensitive face data can be particularly difficult, especially when making datasets publicly available. At the same time, facial data is valuable for tasks such as emotion recognition. To address these competing demands, a research team from Germany proposed a novel approach to face anonymization that preserves the facial expressions needed for emotion recognition.


The authors introduce GANonymization, a novel face anonymization framework that preserves facial expressions. The framework utilizes a generative adversarial network (GAN) to synthesize an anonymized version of a face based on a high-level representation.

The GANonymization framework consists of four components: face extraction, face segmentation, facial landmark extraction, and re-synthesis. In the face extraction step, the RetinaFace framework detects and extracts visible faces, which are then aligned and resized to meet the input requirements of the GAN. Face segmentation removes the background so that only the face remains. Facial landmarks are extracted with a MediaPipe Face Mesh model, providing an abstract representation of the facial shape, and these landmarks are projected onto a 2D image. Finally, a pix2pix GAN architecture performs the re-synthesis, trained on landmark/image pairs from the CelebA dataset. The GAN generates realistic face images from the landmark representations, preserving facial expressions while removing identity-revealing traits.
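The landmark-extraction and projection stages can be sketched with MediaPipe and OpenCV as below. This is a hedged illustration, not the authors' code: the input file name, output size, and the final pix2pix step are assumptions, and the paper's exact preprocessing may differ.

```python
# Sketch of landmark extraction and 2D projection using MediaPipe Face Mesh.
import cv2
import numpy as np
import mediapipe as mp

face = cv2.imread("aligned_face.png")  # assumed: an already detected and aligned face crop
h, w = face.shape[:2]

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                     max_num_faces=1) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(face, cv2.COLOR_BGR2RGB))

# Project the detected 3D landmarks onto a blank 2D canvas: the abstract
# representation that the GAN later re-synthesizes into a face.
canvas = np.zeros((h, w, 3), dtype=np.uint8)
if results.multi_face_landmarks:
    for lm in results.multi_face_landmarks[0].landmark:
        x, y = int(lm.x * w), int(lm.y * h)
        cv2.circle(canvas, (x, y), 1, (255, 255, 255), -1)

cv2.imwrite("landmark_image.png", canvas)
# A pix2pix generator trained on (landmark image, CelebA photo) pairs would
# then map this landmark image to an anonymized, expression-preserving face.
```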

To evaluate the effectiveness of the proposed approach, the research team conducted a comprehensive experimental investigation covering anonymization performance, preservation of emotional expressions, and the impact of training an emotion recognition model on anonymized data. They compared the approach with DeepPrivacy2 on anonymization performance using the WIDER dataset, and assessed the preservation of emotional expressions on the AffectNet, CK+, and FACES datasets. In both inference and training scenarios, the proposed approach outperformed DeepPrivacy2 in preserving emotional expressions across these datasets. These results advance face anonymization techniques, particularly for maintaining emotional information while ensuring privacy protection.
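One simple way to quantify expression preservation, in the spirit of this evaluation, is to run the same emotion classifier on original and anonymized images and measure how often its prediction is unchanged. The sketch below is illustrative only: `emotion_model`, `load_images`, and `anonymize` are hypothetical placeholders, not the authors' actual evaluation code.

```python
# Illustrative metric: fraction of images whose predicted emotion label
# survives anonymization. All referenced objects are hypothetical.
import numpy as np

def emotion_preservation_rate(emotion_model, originals, anonymized):
    """Fraction of images whose predicted emotion label is unchanged."""
    preds_orig = np.argmax(emotion_model.predict(originals), axis=1)
    preds_anon = np.argmax(emotion_model.predict(anonymized), axis=1)
    return float(np.mean(preds_orig == preds_anon))

# Example usage (assumed objects):
# originals  = load_images("affectnet_test/")
# anonymized = np.stack([anonymize(img) for img in originals])
# print(emotion_preservation_rate(emotion_model, originals, anonymized))
```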

In conclusion, this article presented GANonymization, a novel face anonymization framework that uses a generative adversarial network (GAN) to preserve facial expressions while removing identifying traits. The comprehensive experimental investigation demonstrated the approach’s effectiveness in both anonymization performance and preservation of emotional expressions, where it outperformed the comparison method, DeepPrivacy2. These findings advance face anonymization techniques and highlight the potential for maintaining emotional information while ensuring privacy protection.


Check out the Paper and Github link.



Mahmoud is a PhD researcher in machine learning. He also holds a bachelor’s degree in physical science and a master’s degree in telecommunications and networking systems. His current areas of research concern computer vision, stock market prediction and deep learning. He produced several scientific articles about person re-identification and the study of the robustness and stability of deep networks.



