This Artificial Intelligence (AI) Approach can Spot Deepfake Videos of Famous People Using Facial, Gestural, and Vocal Mannerisms

Recent technological advances in artificial intelligence (AI) can be considered a double-edged sword. Although AI has benefited humanity in countless ways, from improving healthcare to providing more personalized and interactive experiences, it also comes with drawbacks. One such adverse effect is the surge in deepfakes, or synthetically generated media. Deepfakes (a blend of “deep learning” and “fake”) are AI-generated media in which a person in an existing image or video is replaced with someone else’s likeness. They are produced with powerful machine learning techniques that yield audio and visual content convincing enough to deceive a general audience. Since their introduction a few years ago, deepfakes have greatly improved in quality, sophistication, and ease of generation. The most common deep learning-based techniques for producing them involve training generative architectures such as autoencoders or generative adversarial networks (GANs).
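
For readers unfamiliar with the face-swap recipe behind many deepfakes, the sketch below illustrates the shared-encoder, per-identity-decoder autoencoder idea in PyTorch. All layer sizes, names, and the toy input are illustrative assumptions, not the pipeline of any specific deepfake tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder idea
# behind many face-swap deepfakes. Layer sizes and inputs are illustrative only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

# One shared encoder, one decoder per identity: swapping faces amounts to
# encoding a frame of person A and decoding it with person B's decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a cropped face frame
swapped = decoder_b(encoder(frame_of_a))  # reconstruction in B's likeness
print(swapped.shape)                      # torch.Size([1, 3, 64, 64])
```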

Deepfakes have drawn much attention because of their potential for use in large-scale fraud, nonconsensual pornography, and defamation campaigns. As the technology advances, it is becoming harder to tell whether a video is real, and the danger grows when deepfakes are weaponized against global leaders during election seasons or armed conflicts. One such instance occurred recently when Russian parties produced a deepfake video purporting to show Volodymyr Zelenskyy, the president of Ukraine, saying things he never actually said. According to reports, the video was created to help the Russian government persuade its populace to believe state propaganda about the invasion of Ukraine.

To safeguard world leaders against deepfakes, researchers from the Johannes-Kepler-Gymnasium and the University of California, Berkeley, created an AI application that can determine whether a video clip of a renowned person is authentic or a deepfake. As described in their research paper published in Proceedings of the National Academy of Sciences, the researchers trained their AI system to recognize specific persons’ distinctive mannerisms and use them to judge whether a video is authentic.


The pair took an identity-based approach in their newly developed AI system. They trained it on several hours of real video footage to identify the specific facial, gestural, and vocal traits that distinguish a world leader from an impersonator or deepfake impostor. The researchers observed that, aside from body markings or facial features, people have several other distinctive qualities, one of which is how they move. In Zelenskyy’s case, for example, the Ukrainian president tends to raise his left hand while arching his right eyebrow. Data of this kind was essential for training the deep-learning system to model the subject’s physical mannerisms from a large number of recordings.
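
To make the identity-based idea concrete, here is a minimal sketch in Python: per-clip behavioral features are extracted from authentic footage of one person, a one-class model learns that person’s “normal” profile, and clips that fall outside it are flagged. The feature extractor, the one-class SVM, and all numbers are illustrative assumptions and do not reproduce the paper’s actual features or classifier.

```python
# Rough sketch of the identity-based idea: learn what "normal" behavioral
# features look like for one person from real footage, then flag clips whose
# features fall outside that learned profile. The feature extractor below is
# a hypothetical placeholder.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

def extract_mannerism_features(clip) -> np.ndarray:
    """Hypothetical stand-in for per-clip facial/gestural/vocal features
    (e.g., eyebrow raises, hand elevation, pitch statistics)."""
    return rng.normal(loc=clip["bias"], scale=1.0, size=16)

# "Real" clips of the subject cluster around one behavioral profile...
real_clips = [{"bias": 0.0} for _ in range(200)]
# ...while impersonations and deepfakes drift away from it.
fake_clips = [{"bias": 3.0} for _ in range(20)]

X_real = np.stack([extract_mannerism_features(c) for c in real_clips])
X_fake = np.stack([extract_mannerism_features(c) for c in fake_clips])

# Fit only on authentic footage: anything too unlike it gets flagged.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_real)

print("real clips flagged authentic:", (model.predict(X_real) == 1).mean())
print("fake clips flagged authentic:", (model.predict(X_fake) == 1).mean())
```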

Over time, the algorithm became adept at picking up mannerisms that human viewers are unlikely to notice. The pair evaluated their method on several deepfake videos alongside authentic videos of various people. The results were impressive: the method was 100% successful in distinguishing authentic videos from fake ones, and it correctly established that the Zelenskyy video was a fake.
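
As a toy illustration of this kind of evaluation, the snippet below scores a simulated mix of authentic and deepfake clips and measures how cleanly the scores separate the two classes. The scores are fabricated for demonstration and are not the paper’s results.

```python
# Toy evaluation: higher score = "looks like the real person". The score
# distributions here are simulated purely to illustrate the metrics.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
scores_real = rng.normal(2.0, 0.5, size=100)   # authentic clips
scores_fake = rng.normal(-2.0, 0.5, size=100)  # deepfake clips

y_true = np.concatenate([np.ones(100, dtype=int), np.zeros(100, dtype=int)])
scores = np.concatenate([scores_real, scores_fake])

print("AUC:", roc_auc_score(y_true, scores))
preds = (scores > 0).astype(int)               # threshold the score at zero
print("accuracy at threshold 0:", (preds == y_true).mean())
```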

Although the team’s study focuses heavily on Zelenskyy, they stress that the methodology can be applied to any high-profile figure for whom enough original video footage is available. The researchers also stated that they do not plan to release their classifier publicly, so that adversaries cannot engineer counterattacks against it. Instead, in an effort to combat deepfake-fueled misinformation, they have made it available to credible news and government organizations.


Check out the Paper and Reference Article. All credit for this research goes to the researchers on this project.


Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Goa. She is passionate about the fields of Machine Learning, Natural Language Processing, and Web Development. She enjoys learning more about the technical field by participating in several challenges.


