AI Researchers Propose A Method Using Irregular Pupil Shapes To Identify GAN-Generated Synthetic Faces
Computer-generated faces have recently improved to the point that they are difficult to tell apart from real ones. That makes them a handy tool for online criminals, who can use them, for example, to create convincing profile photos for bogus social media accounts.
As a result, computer scientists have been working on strategies to swiftly and efficiently detect these images. Now, a group of researchers from the State University of New York (SUNY) and their collaborators has devised a method for exposing fake faces. The weakness, they say, lies in the eyes.
Synthetic face generation is based on generative adversarial networks (GANs), a type of deep learning. The method entails feeding photos of real faces into a neural network and asking it to create faces of its own. These generated faces are then passed to a second neural network that tries to tell the fakes from real photos, and the first network learns from the mistakes the second one catches.
The feedback between these “adversarial networks” improves the output swiftly, to the point that the synthetic faces become difficult to distinguish from real ones. However, they are not without flaws. Generative adversarial networks, for example, have difficulty reliably reproducing facial accessories such as earrings and glasses, which often come out mismatched on either side of the face. The faces themselves, though, appear genuine, making them hard to spot reliably.
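To make the adversarial setup concrete, the sketch below shows a minimal GAN training loop in PyTorch. It is purely illustrative: the tiny fully connected networks, the image size, and the random “real” batch are placeholder assumptions, not the architecture behind any actual face generator.

```python
# Minimal GAN sketch in PyTorch (illustration only, not the models from the paper).
# A generator maps random noise to flattened "images"; a discriminator tries to
# tell generated samples from real ones, and the two are trained adversarially.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64      # toy flattened image size (assumption for illustration)
NOISE_DIM = 100

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train the discriminator: real images should score 1, fakes should score 0.
    noise = torch.randn(batch_size, NOISE_DIM)
    fake_batch = generator(noise).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator: it "learns from its mistakes" by trying to make
    #    the discriminator label its output as real.
    noise = torch.randn(batch_size, NOISE_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on a random "real" batch standing in for real face photos.
print(train_step(torch.randn(16, IMG_DIM)))
```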
According to a team of researchers from the University at Albany and the University at Buffalo (both SUNY) and Keya Medical, Seattle, USA, GANs do not render pupils with the regular circular or elliptical shapes of real eyes, and this is what exposes them. The team built software to extract the pupil shape from facial photos and used it to analyze 1,000 images of genuine faces and 1,000 synthetically generated faces, grading each photograph on the regularity of its pupils.
Real human pupils have a smooth, roughly elliptical shape, so genuine faces score highly. Irregular pupil shapes, by contrast, produce much lower scores. This stems from how GANs work: they have no inherent understanding of human facial anatomy, and the lack of physiological constraints in GAN models produces the artifact. In their paper, the researchers show a clear separation between the distributions of BIoU scores for real and GAN-generated faces, which can be used as a quantitative measure to tell them apart.
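As a rough illustration of this idea, the sketch below scores how well a segmented pupil matches a fitted ellipse using OpenCV. It is not the paper's pipeline: the pupil-segmentation step is assumed to have already produced a binary mask, and plain mask IoU against the fitted ellipse stands in for the paper's BIoU measure.

```python
# Sketch of a pupil-regularity score: fit an ellipse to a segmented pupil mask
# and measure how well the mask matches that ellipse. Real pupils should match
# closely (score near 1); irregular GAN pupils should score lower. The paper's
# segmentation model and exact BIoU metric are not reproduced here; plain mask
# IoU against the fitted ellipse is used as a simple stand-in.
import cv2
import numpy as np

def pupil_regularity(pupil_mask: np.ndarray) -> float:
    """pupil_mask: binary (0/255) uint8 image where the pupil pixels are white."""
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    contour = max(contours, key=cv2.contourArea)
    if len(contour) < 5:          # cv2.fitEllipse needs at least 5 points
        return 0.0

    # Fit an ellipse to the pupil contour and rasterize it as a filled mask.
    ellipse = cv2.fitEllipse(contour)
    ellipse_mask = np.zeros_like(pupil_mask)
    cv2.ellipse(ellipse_mask, ellipse, 255, thickness=-1)

    # Intersection-over-union between the segmented pupil and the ideal ellipse.
    inter = np.logical_and(pupil_mask > 0, ellipse_mask > 0).sum()
    union = np.logical_or(pupil_mask > 0, ellipse_mask > 0).sum()
    return float(inter) / float(union) if union else 0.0

# Example: a clean circular "pupil" scores close to 1.0.
mask = np.zeros((100, 100), np.uint8)
cv2.circle(mask, (50, 50), 20, 255, thickness=-1)
print(round(pupil_regularity(mask), 3))
```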
A human can readily determine whether a face is real using this clue. That is an intriguing result, because it means a synthetic face can be recognized quickly and easily whenever the pupils are visible. It would also be a simple task to write a program that automates the check, as sketched below.
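For instance, a minimal decision rule could simply threshold the regularity scores of the two pupils. The `looks_gan_generated` helper and its 0.9 cutoff are hypothetical values chosen for illustration, not figures from the paper.

```python
# Sketch of the "simple program" idea: flag a face as likely GAN-generated when
# its pupil-regularity score falls below a threshold. The 0.9 cutoff is a
# hypothetical value for illustration, not a number from the paper.
def looks_gan_generated(left_score: float, right_score: float,
                        threshold: float = 0.9) -> bool:
    """Scores come from a pupil-regularity measure such as the one sketched above."""
    return min(left_score, right_score) < threshold

print(looks_gan_generated(0.97, 0.95))   # regular pupils -> False (likely real)
print(looks_gan_generated(0.93, 0.71))   # one irregular pupil -> True (likely fake)
```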
However, this also points to a way for bad actors to circumvent such a test: it would be simple for them to edit the pupils in the synthetic faces they construct into regular circular shapes.
Paper: https://arxiv.org/pdf/2109.00162.pdf
Related Git: https://github.com/neu-eyecool/NIR-ISL2021