Intel has unveiled FakeCatcher, a real-time deepfake detector that returns a verdict in milliseconds with 96% accuracy.
Deepfakes are AI-generated images and videos designed to make it look as if someone is saying or doing something they are not. The technology has become so advanced that deepfakes can be nearly impossible to identify at a glance, and the potential implications are staggering: deepfakes can be used to discredit public figures, influence elections, ruin business leaders, create revenge porn, and far more.
“Deepfake videos are everywhere now,” says Ilke Demir, senior staff research scientist in Intel Labs. “You have probably already seen them; videos of celebrities doing or saying things they never actually did.”
FakeCatcher was designed by Demir in collaboration with Umur Ciftci from the State University of New York at Binghamton. The solution relies heavily on AI, along with algorithms that detect faces and facial landmarks so that only facial regions are analyzed.
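Intel has not released the detection pipeline, but its first stage, locating the face in each frame, can be approximated with a stock detector. The sketch below is a minimal illustration using OpenCV's bundled Haar cascade and a hypothetical input file name; FakeCatcher's actual face and landmark models are not public.

```python
import cv2

# Minimal face-localization sketch (not Intel's pipeline): find faces in
# each frame so that downstream analysis sees only facial pixels.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each (x, y, w, h) box is a detected face; its crop would be passed
    # to the blood-flow analysis described below.
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        face_roi = frame[y:y + h, x:x + w]
cap.release()
```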
Most deep-learning-based detectors examine a video's raw data for signs of inauthenticity, trying to identify what is wrong with it. FakeCatcher takes the opposite approach: it looks for authentic clues in real videos by assessing what makes us human, the subtle “blood flow” in the pixels of a video. When our hearts pump blood, our veins change color. FakeCatcher collects these blood-flow signals from all over the face, and its algorithms translate them into spatiotemporal maps. Deep learning then instantly classifies the video as real or fake.
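Intel has not published FakeCatcher's model or code, but the pipeline as described, per-region color signals stacked into a spatiotemporal map and handed to a classifier, can be sketched in a few lines. Everything below is an illustrative assumption: the region grid, the use of the green channel (a common choice in remote photoplethysmography), and the placeholder classifier are not Intel's implementation.

```python
import numpy as np

def region_signals(face_frames: np.ndarray, grid: int = 4) -> np.ndarray:
    """Average the green channel over a grid x grid set of facial patches.

    face_frames: (T, H, W, 3) stack of cropped face frames.
    Returns (grid*grid, T): one color signal per region. The green channel
    is a conventional choice in remote photoplethysmography because it
    tracks blood-volume changes most strongly.
    """
    T, H, W, _ = face_frames.shape
    h, w = H // grid, W // grid
    signals = np.empty((grid * grid, T))
    for i in range(grid):
        for j in range(grid):
            patch = face_frames[:, i*h:(i+1)*h, j*w:(j+1)*w, 1]
            signals[i * grid + j] = patch.mean(axis=(1, 2))
    return signals

def spatiotemporal_map(signals: np.ndarray) -> np.ndarray:
    """Z-normalize each region's signal and stack into a (regions x time)
    map that a CNN-style classifier can treat as a small image."""
    mean = signals.mean(axis=1, keepdims=True)
    std = signals.std(axis=1, keepdims=True) + 1e-8
    return (signals - mean) / std

# Hypothetical usage: `classifier` stands in for any trained real-vs-fake
# model; FakeCatcher's own network is not public.
# face_frames = ...                      # crops from the detector above
# stmap = spatiotemporal_map(region_signals(face_frames))
# verdict = classifier.predict(stmap[np.newaxis, ...])
```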
Intel’s FakeCatcher is an important tool in the fight against deepfakes, and will hopefully help debunk them and mitigate the damage they can do.