Intel AI Introduces Deepfake Detector that Analyzes ‘Blood Flow’ in Video Pixels to Return Results in Milliseconds with 96% Accuracy
Deepfakes, videos in which one person's face is swapped onto another's, can be entertaining. But as the underlying technology improves, deepfakes look increasingly real, and that is the bigger concern: convincing impersonation can spread misinformation. The result is a growing ecosystem of tools, some built to create fakes and others built to catch them.
FakeCatcher, a new real-time deepfake detector from Intel, determines whether a video is real or fake with 96% accuracy. According to Intel’s Responsible AI research, it is “the first real-time deepfake detector in the world that provides results in milliseconds.” Most detectors look for signs of manipulation in a video’s raw data, whereas FakeCatcher searches for authentic clues in real videos, such as the subtle blood-flow signal visible in the pixels of a face that marks us as human. These signals are gathered from across the face and translated into spatiotemporal maps, and a deep learning model then determines whether the video is authentic or fake. The technology runs through a web-based platform and can analyze up to 72 detection streams in parallel on 3rd Gen Intel Xeon Scalable processors.
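Intel has not published FakeCatcher’s code, but the general idea of turning facial pixels into a spatiotemporal map can be sketched roughly. The snippet below is a minimal illustration, not Intel’s method: it assumes OpenCV’s stock Haar-cascade face detector, uses average green-channel intensity per facial grid cell as a crude stand-in for a blood-flow signal, and stacks those values over time into a map. The grid size and frame count are arbitrary choices.

```python
# Illustrative sketch only: approximate a "blood flow"-style signal by tracking
# mean green-channel intensity over a grid of facial regions across frames,
# then stack the per-frame readings into a spatiotemporal map.
import cv2
import numpy as np

GRID = 8  # hypothetical number of grid cells per side within the face box

def spatiotemporal_map(video_path: str, max_frames: int = 128) -> np.ndarray:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    rows = []
    while len(rows) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            continue
        x, y, w, h = faces[0]
        face = frame[y:y + h, x:x + w]
        cell_h, cell_w = h // GRID, w // GRID
        # Mean green-channel value per grid cell serves as a crude pulse proxy.
        row = [face[i * cell_h:(i + 1) * cell_h,
                    j * cell_w:(j + 1) * cell_w, 1].mean()
               for i in range(GRID) for j in range(GRID)]
        rows.append(row)
    cap.release()
    # Rows index time, columns index facial regions: a spatiotemporal map.
    return np.asarray(rows, dtype=np.float32)
```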
Intel’s FakeCatcher was designed by Ilke Demir of Intel Labs together with Umur Ciftci of the State University of New York at Binghamton. The application uses Intel software and hardware, runs on a server, and is accessed through a web-based interface. The FakeCatcher team used OpenVINO to run the AI models for face and landmark detection, optimized the computer vision blocks with OpenCV and Intel Integrated Performance Primitives, accelerated the inference blocks with Intel Advanced Vector Extensions 512 and Intel Deep Learning Boost, and optimized the media blocks with Intel Advanced Vector Extensions 2. The teams also used the Open Visual Cloud initiative to create an integrated software stack for the Intel Xeon Scalable processor series.
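To give a sense of how the OpenVINO runtime fits into such a pipeline, here is a minimal sketch of running a face-detection model on the CPU. It assumes the OpenVINO Python package is installed and that a face-detection IR file is already on disk; the model filename, input image, and input shape are placeholders rather than details Intel has published for FakeCatcher.

```python
# Minimal OpenVINO inference sketch (placeholder model and input, not FakeCatcher's stack).
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("face-detection-retail-0004.xml")  # hypothetical model path
compiled = core.compile_model(model, "CPU")  # Xeon CPUs dispatch AVX-512 / DL Boost kernels here

frame = cv2.imread("frame.jpg")              # placeholder input frame
n, c, h, w = compiled.input(0).shape         # e.g. (1, 3, 300, 300)
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

detections = compiled([blob])[compiled.output(0)]
print(detections.shape)                      # bounding-box candidates for faces
```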
Most deep learning-based detectors scan raw data for signs of inauthenticity and try to pinpoint the flaws in a video. FakeCatcher, in contrast, looks for what makes us human: genuine clues in authentic footage, such as the subtle color changes in our veins as the heart pumps blood. These blood-flow signals are gathered from across the face and converted into spatiotemporal maps, and deep learning then quickly determines whether a video is real or fake.
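The final classification step can also be sketched. The toy model below, a small convolutional network in PyTorch, takes spatiotemporal maps like the one built earlier (time by facial-region matrices) and outputs real-versus-fake logits. The architecture and sizes are illustrative assumptions, not FakeCatcher’s published network.

```python
# Toy classifier over spatiotemporal maps (illustrative architecture only).
import torch
import torch.nn as nn

class MapClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)   # logits for "real" vs. "fake"

    def forward(self, x):              # x: (batch, 1, time, regions)
        return self.head(self.features(x).flatten(1))

# Usage: one 128-frame map over 64 facial regions, batch of 1.
model = MapClassifier()
example_map = torch.randn(1, 1, 128, 64)      # placeholder spatiotemporal map
probs = model(example_map).softmax(dim=-1)    # P(real), P(fake)
print(probs)
```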
Deepfake videos are a growing threat, and detecting them in real time is difficult: existing detection apps require uploading footage for analysis, which can take hours. Deepfake deception can damage a company’s reputation and erode public trust in the media; by helping users distinguish real content from fake, FakeCatcher helps restore that trust. According to Gartner, businesses will spend up to $188 billion on cybersecurity solutions.
FakeCatcher has numerous potential applications. International news organizations could use the detector to avoid unintentionally amplifying manipulated videos. Social media platforms could use the technology to stop users from uploading harmful deepfake videos. And nonprofit groups could use the platform to democratize deepfake detection for everyone.
Check out the reference article from Intel. All credit for this research goes to the researchers on this project.
Avanthy Yeluri is a Dual Degree student at IIT Kharagpur. She has a strong interest in Data Science because of its numerous applications across a variety of industries, as well as its cutting-edge technological advancements and how they are employed in daily life.