Researchers From LinkedIn And UC Berkeley Propose A New Method To Detect AI-Generated Profile Photos
The sophistication of fake profiles has increased alongside the proliferation of artificial intelligence (AI)-generated synthetic and text-to-image media. LinkedIn partnered with UC Berkeley to study cutting-edge detection methods. The resulting detection method correctly identifies AI-generated profile pictures 99.6% of the time while misidentifying genuine pictures as fake only 1% of the time.
Two types of forensic methods can be used to investigate this issue:
- Hypothesis-based methods look for known anomalies in synthetically generated faces and have the advantage of catching blatant semantic outliers. The drawback is that learning-based synthesis engines improve quickly and appear to have already learned to reproduce many of these features correctly, erasing the artifacts such methods rely on.
- Data-driven methods, such as trained machine-learning classifiers, can distinguish natural faces from computer-generated ones. However, such systems often struggle to classify images that fall outside the distribution they were trained on.
The proposed work adopts a hybrid approach: it first identifies a distinctive geometric property of computer-generated faces and then applies data-driven tools to measure and detect it. The method uses a lightweight, quickly trainable classifier that requires only a small set of synthetic faces for training: the researchers build 41,500 synthetic faces with five distinct synthesis engines and add 100,000 real LinkedIn profile pictures as real-class data.
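The paper's exact geometric features and classifier are not spelled out here, so the following is only a minimal sketch of the general idea, not the authors' implementation. It assumes hand-picked eye-landmark features (eye midpoint and interocular distance, with hypothetical coordinate values) fed to a small logistic-regression model.

```python
# Minimal sketch (not the authors' pipeline): turn simple face-landmark geometry
# into features and train a lightweight classifier on them.
# Assumes eye landmarks are given as (x, y) pixel coordinates; the feature
# choice, coordinates, and classifier are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def geometric_features(left_eye, right_eye, img_w, img_h):
    """Normalized eye midpoint and interocular distance for one image."""
    left, right = np.asarray(left_eye, float), np.asarray(right_eye, float)
    midpoint = (left + right) / 2.0
    interocular = np.linalg.norm(right - left)
    return np.array([
        midpoint[0] / img_w,   # horizontal eye position (0..1)
        midpoint[1] / img_h,   # vertical eye position (0..1)
        interocular / img_w,   # interocular distance relative to image width
    ])

# Toy training data: rows of features, label 1 = synthetic, 0 = real.
X = np.array([
    geometric_features((400, 480), (624, 480), 1024, 1024),  # GAN-like: centered, fixed spacing
    geometric_features((395, 478), (628, 482), 1024, 1024),
    geometric_features((210, 310), (290, 330), 800, 1000),   # real-like: varied framing
    geometric_features((500, 250), (610, 240), 900, 1200),
])
y = np.array([1, 1, 0, 0])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X)[:, 1])  # probability each image is synthetic
```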
To see how actual (publicly available) LinkedIn profile pictures compare with synthetically generated (StyleGAN2) faces, the researchers averaged roughly 400 images of each and placed the composites side by side. Because real photos vary so much from person to person, the average real profile picture is a blurry, generic headshot. The average StyleGAN face, in contrast, has sharp features and clearly defined eyes, because StyleGAN standardizes the ocular location and interocular distance. Real profile pictures also typically include the upper body and shoulders, whereas StyleGAN faces are generally synthesized from the neck up. The method is designed to exploit these regularities within each group and the differences between the two groups.
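This kind of "average face" comparison can be reproduced in a few lines. The sketch below is a rough illustration rather than the authors' code: the folder names are hypothetical, it assumes same-format JPEGs, and it ignores the face alignment a careful composite would rely on.

```python
# Pixel-wise averaging of many photos resized to a common size, to produce a
# composite "average face" per group. Paths and size are assumptions; the paper
# averaged ~400 images per group.
from pathlib import Path
import numpy as np
from PIL import Image

def average_image(folder, size=(256, 256)):
    """Resize every JPEG in `folder` to `size` and return their pixel-wise mean."""
    acc, n = np.zeros((size[1], size[0], 3), dtype=np.float64), 0
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("RGB").resize(size)
        acc += np.asarray(img, dtype=np.float64)
        n += 1
    return Image.fromarray((acc / max(n, 1)).astype(np.uint8))

# Hypothetical folders of real vs. StyleGAN2 images:
# average_image("real_profiles").save("avg_real.png")
# average_image("stylegan2_faces").save("avg_synthetic.png")
```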
Earlier work combined a one-class variational autoencoder (VAE) with a baseline one-class autoencoder to identify deepfake face swaps in the FaceForensics++ dataset. In contrast to that face-swap-focused work, the present work emphasizes synthetic faces (e.g., StyleGAN) and uses a considerably simpler, easier-to-train classifier on a relatively small number of synthetic images, while achieving comparable overall classification performance.
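For context, one-class detectors of this kind are typically fit only on real faces and flag anything they reconstruct poorly. The paper's VAE and autoencoder baselines are not reproduced here; the sketch below illustrates the same reconstruction-error idea with a PCA model standing in for the autoencoder, and all data shapes and thresholds are toy values.

```python
# Illustrative one-class detection via reconstruction error. A PCA model stands
# in for the autoencoder used in prior work: it is fit on "real" features only,
# and samples that reconstruct poorly are flagged as fake.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
real_feats = rng.normal(size=(500, 64))           # stand-in features of real faces
fake_feats = rng.normal(2.0, 1.5, size=(50, 64))  # stand-in features of synthetic faces

pca = PCA(n_components=16).fit(real_feats)

def recon_error(x):
    """Squared reconstruction error under the model fit on real faces only."""
    return np.sum((x - pca.inverse_transform(pca.transform(x))) ** 2, axis=1)

threshold = np.percentile(recon_error(real_feats), 99)  # ~1% FPR on the fit set
print("fraction of fakes flagged:", np.mean(recon_error(fake_feats) > threshold))
```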
They evaluate the model's generalization ability on images produced by Generated.photos and Stable Diffusion. Faces from Generated.photos, which are produced by a generative adversarial network (GAN), are handled reasonably well by their method, whereas Stable Diffusion (diffusion-based) faces are not.
TPR stands for "true positive rate" and measures how often fake images are correctly identified as fake; FPR, the false positive rate, is the fraction of genuine images wrongly labeled as fake. The findings show that the proposed method correctly identifies 99.6% (TPR) of synthetic StyleGAN, StyleGAN2, and StyleGAN3 faces while flagging only 1% (FPR) of authentic LinkedIn profile pictures as fake.
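For concreteness, here is a small worked example of these two rates, with toy counts chosen so the output matches the reported 99.6% TPR and 1% FPR.

```python
# TPR/FPR for binary labels: 1 = synthetic/fake, 0 = real.
import numpy as np

def tpr_fpr(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # fakes caught
    fn = np.sum((y_true == 1) & (y_pred == 0))  # fakes missed
    fp = np.sum((y_true == 0) & (y_pred == 1))  # real photos flagged as fake
    tn = np.sum((y_true == 0) & (y_pred == 0))  # real photos passed
    return tp / (tp + fn), fp / (fp + tn)

# e.g. 1000 fakes with 996 caught, 1000 real photos with 10 flagged:
y_true = [1] * 1000 + [0] * 1000
y_pred = [1] * 996 + [0] * 4 + [1] * 10 + [0] * 990
print(tpr_fpr(y_true, y_pred))  # -> (0.996, 0.01)
```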
They also compare the method against a state-of-the-art convolutional neural network (CNN) model used for forensic image classification and find that their method performs better.
According to the team, a major disadvantage is that the method can be easily defeated by a cropping attack. Because StyleGAN-generated images are already tightly cropped around the face, however, such an attack would tend to produce unusual-looking profile pictures. The team plans to explore more advanced techniques that can learn scale- and translation-invariant representations.
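To make the cropping attack concrete, the sketch below shows the kind of crop-and-rescale transform that would shift the eye position and interocular distance a geometry-based detector relies on; the crop fraction and file names are arbitrary assumptions.

```python
# Simulate a simple cropping attack: keep the central portion of the image and
# resize it back to the original dimensions, which changes the apparent face geometry.
from PIL import Image

def crop_attack(img, crop_frac=0.8):
    """Keep the central `crop_frac` of the image, then resize back to the original size."""
    w, h = img.size
    cw, ch = int(w * crop_frac), int(h * crop_frac)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch)).resize((w, h))

# Hypothetical usage on a StyleGAN-generated face:
# attacked = crop_attack(Image.open("stylegan_face.png"))
# attacked.save("stylegan_face_cropped.png")
```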
Check out the paper and the reference article for more details.
Dhanshree Shenwai is a computer science engineer with solid experience at FinTech companies spanning the financial, cards & payments, and banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone's life easier.