
How to spot a deepfake? One simple trick is all you need

If you're not convinced they are who they say they are, this weakness in deepfake software could help.
Written by Liam Tung, Contributing Writer

With criminals beginning to use deepfake video technology to spoof an identity in live online job interviews, security researchers have highlighted one simple way to spot a deepfake: just ask the person to turn their face sideways on. 

The reason this works as a handy authentication check is that deepfake AI models, while good at recreating front-on views of a person's face, aren't good at side-on or profile views like the ones you might see in a mug shot. 

Metaphysic.ai highlights the instability of recreating full 90° profile views in live deepfake videos, which makes the side-profile check a simple and effective authentication procedure for companies conducting video-based online job interviews. 

Deepfakes, or synthetic AI-enabled recreations of audio, image and video content of humans, have been on the radar as a potential identity threat for several years.

However, in June, the Federal Bureau of Investigation warned it had seen an uptick in scammers using deepfake audio and video in online job interviews, which became far more common during the pandemic. The FBI noted that tech vacancies were targeted by deepfake candidates because the roles would give the attacker access to corporate IT databases, private customer data, and proprietary information.  

The FBI warned that video participants could spot a deepfake when coughing, sneezing or other sounds don't line up with what's shown in the video. The side-profile check could be a quick and easy-to-follow test for humans to run before beginning an online video meeting. 

Writing for Metaphysic.ai, Martin Anderson details the company's experiments. Most of the deepfakes it created failed obviously when the head reached 90° and revealed elements of the person's actual side profile. The profile-view recreation fails because of a lack of good-quality training data of the profile, which forces the deepfake model to invent, or "inpaint", much of what's missing. 

Part of the problem is that deepfake software needs to detect landmarks on a person's face in order to recreate it. When the face is turned side-on, the algorithms have only half the landmarks available for detection compared to the front-on view. 
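To make the landmark problem concrete, here is a minimal sketch that probes a single video frame with dlib's stock 68-point landmark predictor, an illustrative choice of 2D alignment stage rather than the tooling used in Metaphysic.ai's experiments. Frontal frames typically yield all 68 points; near-90° profiles often fail face detection outright, leaving nothing to map.

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Assumes a local copy of dlib's pre-trained 68-point model file.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks_for_frame(frame):
    """Return (x, y) landmarks for the first detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        return None  # the typical outcome for a full side profile
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]

points = landmarks_for_frame(cv2.imread("frame.jpg"))  # hypothetical frame
print(f"landmarks found: {len(points) if points else 0} / 68")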

Anderson notes that the major weaknesses in recreating a face from side-profile video are the limits of 2D-based facial alignment algorithms and a plain lack of profile data for most people other than Hollywood stars.  

Arguing the case for using the side profile for authentication in live video meetings, Anderson points out that there will likely be a persistent shortage of side-view training data for average people. There's little demand for stock photos of profile headshots because they're unflattering, and little motivation for photographers to supply them either, since they offer scant emotional insight into a face. 

"That paucity of available data makes it difficult to obtain a range of profile images on non-celebrities that's diverse and extensive enough to train a deepfake model to reproduce profile views convincingly," writes Anderson. 

"This weakness in deepfakes offers a potential way of uncovering 'simulated' correspondents in live video calls, recently classified as an emergent risk by the FBI: if you suspect that the person you're talking to might be a 'deepfake clone', you could ask them to turn sideways for more than a second or two, and see if you're still convinced by their appearance."

Sensity, a maker of liveness-detection and deepfake-detection software, reported in May that nine out of the 10 widely adopted biometric verification systems used in financial services for Know Your Customer (KYC) compliance were severely vulnerable to deepfake 'face swap' attacks. 

Commonly used liveness tests, which involve a person looking into a camera on a connected device, were also easily duped by deepfakes, it found. These tests require the person to move their head left and right and smile. The deepfakes Sensity used involved the would-be fraudster moving their head left and right, but the video shows the head stops turning before it reaches 90°. 

Sensity's CEO and Chief Scientist Giorgio Patrini confirmed to Metaphysic.ai they did not use full 90° profile views in their tests. 

"Lateral views of people faces, when used as a form of identity verification, may indeed provide some additional protection against deepfakes. As pointed out, the lack of widely available profile view data make the training of deepfake detector very challenging." 

Another useful way to rattle a live deepfake model is to ask the video participant to wave their hands in front of their face. This disrupts the model and reveals latency and quality issues with the superimposition of the deepfake over the face.
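A screening tool could time that hand-wave test automatically. The sketch below, assuming MediaPipe's hand and face detectors and a simple overlap rule (both assumptions for illustration), flags frames where a detected hand crosses the face box, which are exactly the frames worth pausing and inspecting for warping artifacts.

import cv2
import mediapipe as mp

face_detector = mp.solutions.face_detection.FaceDetection()
hand_detector = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2)

def hand_over_face(frame):
    """Return True if any detected hand landmark falls inside the face box."""
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    face = face_detector.process(rgb)
    hands = hand_detector.process(rgb)
    if not face.detections or not hands.multi_hand_landmarks:
        return False
    # Normalised bounding box of the first detected face.
    box = face.detections[0].location_data.relative_bounding_box
    for hand in hands.multi_hand_landmarks:
        for pt in hand.landmark:
            if (box.xmin <= pt.x <= box.xmin + box.width
                    and box.ymin <= pt.y <= box.ymin + box.height):
                return True
    return False

frame = cv2.imread("frame.jpg")  # hypothetical frame captured from the call
print("hand over face:", hand_over_face(frame))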
