
Safeguarding Your Likeness from the Rise of Deepfakes

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. David Divitt of Veriff goes deep into safeguarding your likeness from deepfakes, as the technology becomes more easily accessible.

Identity theft is a staple of fraud actors' tactics, from fake IDs to impersonating people online after phishing for their identifying information. However the fraudster intends to use the newly assumed identity, the outcome for victims of these scams is usually the same: a complete upending of their personal life, reputational harm, or even financial ruin. Today's average internet user is likely familiar with the traps many scammers use to gain access to personal information. But what if all they needed was an image of your face, or a sound bite of your voice?

When it comes to identity fraud, there’s little as powerful as a deepfake. Deepfakes have been around for several years (the term was coined in 2017), but with such significant progress being made in the development of artificial intelligence (AI) solutions every day, this form of fraud has increased in sophistication. Today, deepfake fraud rates continue to rise rapidly, and with the emergence of deepfakes-as-a-service, they’ll also become more scalable.

Let’s explore the current state of deepfake fraud, and what businesses — and the average person — can do to protect their identities from malicious uses of AI.

The Basics of Deepfake Tech

The most common method of deepfake creation centers on deep learning, a subset of AI and machine learning (ML) that uses layered neural networks to extract progressively more precise details from the data it's fed, whether that's an image, an audio or video clip, or even a sample of text. Such sophisticated tech was previously out of reach for small-time fraudsters, limiting them to more rudimentary tactics. But now that AI has reached mainstream accessibility, it's powering a concerning surge in fraud.
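
To make the idea of "layered" networks concrete, here is a minimal sketch in Python using PyTorch (my choice of framework; the article names none, and this is an illustration rather than any real deepfake model). Each layer extracts progressively more abstract detail, turning raw pixels into a compact representation of a face:

```python
# Minimal sketch of a layered ("deep") network -- an illustration only,
# not any specific deepfake model.
import torch
import torch.nn as nn

class TinyFeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        # Each layer extracts progressively more abstract detail
        # from the raw pixels it is fed.
        self.layers = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # edges, textures
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # larger facial parts
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),  # a compact "identity" code
        )

    def forward(self, x):
        return self.layers(x)

# A 64x64 RGB face crop becomes a 128-dimensional feature vector.
features = TinyFeatureExtractor()(torch.randn(1, 3, 64, 64))
print(features.shape)  # torch.Size([1, 128])
```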

Many of the deepfakes the average person will encounter in everyday life take the form of video: typically a doctored clip of a victim appearing to do or say something they didn't (whether by superimposing their features over someone else or by "puppeteering" their image directly). This dangerous technology enables fraud actors to create fabricated recordings or even impersonate a person in real time, and it is often used to bypass authentication processes that rely on facial recognition or liveness detection, as well as to produce falsified blackmail material or fake endorsements. You have likely already seen some on the internet without realizing it.

While creating a credible result would normally require an immense amount of data around the target's likeness, the alarmingly fast progression of deepfake technology is already simplifying the process. Generative Adversarial Networks (GANs) set up opposing neural networks: one (a "discriminator") trained to sniff out fake images, and one (a "generator") trained to create fake images that fool the discriminator. The two learn from each other, the discriminator becoming more adept at identifying fakes and the generator producing more convincing results to sneak past it. The result? Fraudsters with even a single image of a target (and a well-trained GAN) can create a frighteningly plausible deepfake that has been refined to be as undetectable as possible.
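
That adversarial loop can be sketched in a few lines (again a toy PyTorch illustration, not a real deepfake tool): the two networks take turns improving against each other.

```python
# Sketch of the GAN dynamic: a generator learns to fool a
# discriminator, while the discriminator learns to catch fakes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.randn(32, 784)  # stand-in for a batch of real face images
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

for step in range(1000):
    # 1) Train the discriminator to label real images 1 and fakes 0.
    fake = G(torch.randn(32, 64))
    d_loss = bce(D(real_images), ones) + bce(D(fake.detach()), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to make the discriminator output 1 on fakes.
    fake = G(torch.randn(32, 64))
    g_loss = bce(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```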

Image-based deepfakes are the most abundant, but they're not the only form deepfakes can take. Just as AI can learn and recreate the intricacies of a human face, it can do the same with a person's voice. Given enough recordings of a target's speech, certain AI models can produce an uncannily accurate replication of their unique voice, whether generated from text or overlaid on a criminal's own voice in real time. Fraud actors can leverage this for sinister applications, such as the aforementioned blackmail material or masquerading as a victim over the phone, deceiving the recipient of the call into believing they're speaking to someone familiar rather than a scammer.
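
As a toy illustration of the kind of "voice fingerprint" such models learn from, here is a hedged sketch using the librosa audio library (my choice; the file names are hypothetical). Real voice-cloning and speaker-verification systems use far richer learned representations, so a crude comparison like this is a prompt for closer scrutiny, never proof:

```python
# Crude illustration: compare the average spectral "fingerprint"
# (MFCCs) of two recordings. File paths are hypothetical.
import librosa
import numpy as np

def voice_fingerprint(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # average timbre profile over the clip

ref = voice_fingerprint("known_speaker.wav")    # hypothetical reference clip
sus = voice_fingerprint("suspicious_call.wav")  # hypothetical suspect clip

cos_sim = np.dot(ref, sus) / (np.linalg.norm(ref) * np.linalg.norm(sus))
print(f"similarity: {cos_sim:.2f}")  # low similarity -> worth a closer look
```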

Examples of Deepfake Uses

Picture this: You’re at work and you get a video or phone call from an executive at your company (or so it seems). They have a quick question about some sensitive information you’re both privy to, or they want you to give them access to something they’re having trouble with, or even have a task for you to do, like switching the recipient of an outgoing payment. Though many wouldn’t question whether the familiar colleague they’re speaking with is an imposter or not, the possibility exists that you’ve just unwittingly carried out the bidding of a criminal, or allowed them to walk right past your company’s security measures unopposed. “CEO fraud” isn’t a new phenomenon, but deepfakes make it significantly harder for the average person to catch.

Another long-standing form of fraud enhanced by deepfake tech is catfishing, or romance fraud, in which unsuspecting individuals are seduced online and swindled for money or blackmail material. Scammers have catfished their victims for years, but with the ability to manipulate their outward image to an imperceptible degree, they can now blend in seamlessly with legitimate (unverified) users and produce all the "proof" they need to convince a target that they're real.

With the increasing likelihood that a user or employee being onboarded is passing themselves off as someone else, it's more crucial than ever to secure onboarding and user authentication processes with specialized tools designed and trained to weed out the fraudsters behind these assumed identities, and to maintain a healthy level of wariness toward people exhibiting suspicious behavior or making suspicious requests.

How Can We Catch Deepfakes?

Based on the examples above, it's easy to see the current state of AI as terrifying, but it can be and is used for good, including defending against those exact threats. Just as deep learning can falsify minute details undetectable to the average person, AI-powered identity verification processes (like facial scans and liveness detection tests) can detect the minuscule but telltale indicators of a deepfake that are imperceptible to the human eye, and identify when an image has been manipulated. AI can also be employed to identify patterns indicative of fraudulent behavior across multiple sessions and users, and to look for further hints of fraud in attributes of the device used in the check.
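
That cross-session pattern analysis can be as simple in principle as the following hypothetical heuristic (the field names and threshold are invented for illustration): flag any device fingerprint that shows up across an improbable number of accounts.

```python
# Sketch: flag device fingerprints reused across many accounts,
# a common signal of scripted or fraud-farm activity.
# All field names and the threshold are hypothetical.
from collections import defaultdict

def flag_suspicious_devices(sessions, max_accounts_per_device=3):
    accounts_by_device = defaultdict(set)
    for s in sessions:
        accounts_by_device[s["device_fingerprint"]].add(s["account_id"])
    return {device: accounts
            for device, accounts in accounts_by_device.items()
            if len(accounts) > max_accounts_per_device}

sessions = [
    {"account_id": "a1", "device_fingerprint": "fp-9x"},
    {"account_id": "a2", "device_fingerprint": "fp-9x"},
    {"account_id": "a3", "device_fingerprint": "fp-9x"},
    {"account_id": "a4", "device_fingerprint": "fp-9x"},
    {"account_id": "a5", "device_fingerprint": "fp-77"},
]
print(flag_suspicious_devices(sessions))  # {'fp-9x': {'a1', 'a2', 'a3', 'a4'}}
```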

However, as deepfakes grow more sophisticated, these measures may yet prove insufficient against certain forms that require even more data points to detect. With fraud technology regularly outpacing the systems designed to catch it, a key way to protect yourself and your business is to use multiple forms of identity verification for a "belt and braces" security approach. Never assume that one line of defense is enough. Additional checks, such as requiring a scan of a government-issued ID, can deter less-advanced criminals.
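
A "belt and braces" policy can be expressed as a simple decision rule. In this hedged sketch (the check names are invented for illustration), every independent signal must pass before an identity claim is trusted, so a deepfake that beats one check still has to beat all the others:

```python
# Sketch: require multiple independent verification signals to all
# pass before trusting an identity claim. Check names are invented.
def verify_identity(face_match: bool, liveness_pass: bool,
                    id_document_valid: bool, device_trusted: bool) -> bool:
    # No single check is decisive on its own.
    return all([face_match, liveness_pass, id_document_valid, device_trusted])

print(verify_identity(True, True, True, False))  # False -> step up or deny
```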

Whether proprietary or courtesy of a third-party vendor, the workings of these tools are typically hidden from the user entirely, making them all the more difficult for fraud actors to reverse-engineer. Working with third parties on fraud prevention, rather than building a DIY solution, can also come with its own advantages: vendors are typically far better equipped to keep up with rapidly evolving fraud trends, technologies, and patterns. Many have teams of fraud experts constantly reviewing data from their products' deployments to identify new trends in the threat landscape, and can often work directly with customers to create new, tailored solutions and checks to fit their needs.

Finally, the most basic but effective way to avoid falling victim to deepfake fraud is simply to be careful. Phishing and other scams usually have telltale hints beyond anything suspicious about the images or audio they use. Sometimes a user's display name doesn't match their username or email address, indicating a fraudulent account adopting their persona. Sometimes, as in business email compromise (BEC) cases, users posing as someone else will say or ask for something the real "them" never would. Deepfakes may be able to fake a likeness or a voice, but they can't fake common sense, so use yours.
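
That display-name cue can even be automated. Here is a hedged toy check (the token-matching heuristic is my own, invented for illustration, and real mail filters are considerably smarter):

```python
# Toy heuristic: warn when a sender's display name shares no tokens
# with the local part of their email address -- a common BEC tell.
import re

def name_email_mismatch(display_name: str, email: str) -> bool:
    local = email.split("@")[0].lower()
    tokens = re.findall(r"[a-z]+", display_name.lower())
    return not any(tok in local for tok in tokens)

print(name_email_mismatch("Jane Smith", "jane.smith@corp.example"))  # False (ok)
print(name_email_mismatch("Jane Smith", "x99vendor@mail.example"))   # True (warn)
```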

Final Thoughts

It's crucial to stay as vigilant as ever about protecting the sensitive, identifying information that can compromise your identity. It's just as crucial to keep yourself familiar with the kinds of threats currently in use and how they are evolving. Understanding how deepfakes are created, as well as how they can be weaponized, is the first step to understanding how to defend against them. For any business hoping to stay safe, every step after that means keeping pace with deepfakes' advancement. With each new AI breakthrough, the fakes get more convincing.
