
AI in Biometric Digital Identity Fraud

Solutions Review’s Expert Insights Series is a collection of contributed articles written by industry experts in enterprise software categories. Ricardo Amper of Incode examines how AI can be used both maliciously and preventively when it comes to biometric digital identity fraud.

AI technology is showing consumers how advanced AI programs can mimic human dialogue, generate images, and perform various other tasks. ChatGPT responds to questions in a manner remarkably close to human conversation. Similarly, DALL-E creates realistic images from text descriptions alone. While these advanced models demonstrate the benefits of AI, they also raise concerns about malicious activities. For example, AI models can be used to carry out identity fraud, where individuals or organizations impersonate others to deceive or steal information. What started in 2019 with the first reported case of AI-based voice fraud, in which a deepfake of a CEO's voice was used to authorize a transfer of almost $250K, has since developed into more complex and realistic AI video fraud. Ethan Mollick, a Wharton School professor, found that it took him only eight minutes and a computer to create a deepfake of himself giving a lecture. AI can be programmed to mimic real people to steal personal data, and AI-generated images can be used to create fake identities or manipulate information.



Digital Identity Fraud: AI in Biometrics


Malicious AI Attacks in Biometric Identity Verification

This malicious activity extends into advanced digital identity-proofing systems, such as biometric verification. Even when businesses use biometric security measures to confirm that the credentials presented to their system belong to an actual, live individual and not a counterfeit, their measures can still be bypassed. For instance, if facial recognition requires a live selfie, a criminal might try to present a still image or recording instead. The same goes for fingerprint identification, where a fake cast could be used. Other biometric security checks, such as retina scans and voice recognition, are vulnerable to similar tactics.
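To make the gap concrete, here is a minimal sketch, assuming the opencv-python package, of a naive selfie check that only confirms a face is present in the frame. A printed photo or a replayed recording passes it just as easily as a live person, which is why the liveness measures discussed below are needed; the function name and parameters are illustrative, not any vendor's implementation.

```python
# Minimal sketch: a naive "selfie" check that only confirms a face
# is present. A printed photo or replayed video passes this just as
# easily as a live person -- hence the need for liveness detection.
import cv2

def naive_face_check(image_path: str) -> bool:
    """Return True if at least one face is detected. No liveness test."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    img = cv2.imread(image_path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```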

Threat actors can use AI specifically to bypass biometric measures through what are known as 2D and 3D dynamic attacks. For example, some facial recognition systems ask users to perform specific movements, such as blinking; in a 2D dynamic attack, hackers impersonate real people by displaying a sequence of 2D images and playing them back as a video. 3D dynamic attacks use animated 3D images to impersonate users. AI can create 3D avatars that mimic a real face from multiple perspectives or create deepfake “puppets,” which are artificial simulations of a person’s video created by filming an actor or using an animated model, then superimposing an animated face over the original footage. This animated face can be manipulated digitally, like a puppet. Artificial intelligence is used to refine deepfakes and make them appear more realistic.
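One common countermeasure against replayed 2D sequences is to randomize the challenge, so a clip recorded against an earlier session is unlikely to show the right actions in the right order. The sketch below assumes a simple client-server flow; the action list and field names are hypothetical.

```python
# Sketch of randomized challenge-response against replayed 2D attacks.
# The action list and response format are hypothetical examples.
import random
import secrets

ACTIONS = ["blink twice", "turn head left", "turn head right", "smile", "look up"]

def issue_challenge(num_actions: int = 3) -> dict:
    """Ask the user to perform actions in a random, single-use order."""
    return {
        "nonce": secrets.token_hex(8),  # binds the response to this session
        "actions": random.sample(ACTIONS, num_actions),
    }

# A video recorded against yesterday's challenge will show the wrong
# actions (or the wrong order), so the server can reject it.
print(issue_challenge())
```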

Liveness Detection

While this may seem alarming, AI has also been instrumental in enhancing traditional identity authentication methods, including biometric authentication. 3D attacks and deepfakes can be challenging to detect, as they often closely resemble real life to the naked eye. But to identify these presentation attacks, AI software can detect anomalies such as unnatural blinking patterns, unusual pupil movements, or poor lip synchronization. Liveness detection is a digital security method that can determine whether the credentials presented for a biometric identification check represent a living person rather than a high-tech fake. Passive liveness detection, as opposed to active liveness detection, specifically leverages AI to detect complex patterns in data and can help determine whether a user is a real person rather than a pre-recorded video or a photo.
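As one concrete example of anomaly detection on blinking, the eye-aspect-ratio (EAR) heuristic from Soukupová and Čech (2016) can flag clips whose blink behavior is implausible for a live human. The minimal sketch below assumes per-frame eye landmarks (six points per eye) from an external detector such as dlib; the thresholds and the accepted blink-rate band are illustrative.

```python
# Sketch of a blink-based liveness heuristic. Assumes per-frame eye
# landmarks (six (x, y) points per eye) from an external detector.
from math import dist

def eye_aspect_ratio(eye) -> float:
    """EAR drops sharply when the eye closes (Soukupová & Čech, 2016)."""
    a = dist(eye[1], eye[5])
    b = dist(eye[2], eye[4])
    c = dist(eye[0], eye[3])
    return (a + b) / (2.0 * c)

def looks_live(ear_series, closed_thresh: float = 0.21, fps: int = 30) -> bool:
    """Flag clips whose blink rate falls outside a human-plausible band."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks, closed = blinks + 1, True   # falling edge = one blink
        elif ear >= closed_thresh:
            closed = False
    minutes = len(ear_series) / fps / 60
    rate = blinks / minutes if minutes else 0.0
    # Humans blink roughly 15-20 times per minute; the band is generous.
    return 5 <= rate <= 40
```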

AI-powered facial recognition algorithms can extract specific data points from a photo, such as the positions of the eyes, nose, and mouth, and compare them to other pictures in a database. This helps the AI detect signs of spoofing or falsification and weigh the results with a scoring system. AI algorithms can also review photos for signs of involuntary eye movement or check light patterns to determine whether an image is three-dimensional. Moreover, AI algorithms can be trained to detect and prevent these dynamic attacks: the models can learn from past attacks which features in biometric data indicate whether it is authentic. Organizations can likewise use AI-powered biometric scanning software to detect differences in biometric traits, such as facial expressions, and compare them against their database to identify discrepancies.
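A simplified version of this compare-and-score step might look like the following, assuming face embeddings from an external model (e.g., a FaceNet-style network) and a separate anti-spoofing score; all names and thresholds here are illustrative, not a specific product's API.

```python
# Sketch: compare a probe face embedding against an enrolled template
# and fold in an anti-spoofing score. Embeddings are assumed to come
# from an external model; thresholds are illustrative.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, spoof_score: float,
           match_thresh: float = 0.6, spoof_thresh: float = 0.5) -> bool:
    """Accept only if the face matches AND the anti-spoofing check agrees."""
    return (cosine_similarity(probe, enrolled) >= match_thresh
            and spoof_score < spoof_thresh)
```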

Looking Ahead

As an extra line of defense against attacks, organizations should consider implementing multi-factor authentication that combines biometric, behavioral, and knowledge-based methods. They can use a combination of facial and voice recognition, for example, or facial recognition and fingerprint scanning. This reduces the chances of a successful attack and increases users’ security levels. While AI is contributing to malicious activity and spoofing in digital identity management, it is also championing new identity authentication methods, such as passive liveness detection, that make it harder for criminals to operate using phony credentials.
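As a rough illustration of combining factors, the sketch below fuses per-factor confidence scores with weights; the factor names, weights, and acceptance threshold are hypothetical, not drawn from any specific product.

```python
# Sketch: weighted fusion of independent authentication factors.
# Factor names, weights, and the threshold are illustrative.
def fuse_factors(scores: dict, weights: dict, threshold: float = 0.75) -> bool:
    """Each score is a 0..1 confidence from one verifier (face, voice, ...)."""
    total = sum(weights.values())
    combined = sum(scores[f] * weights[f] for f in weights) / total
    return combined >= threshold

# Example: facial recognition plus fingerprint scanning, weighted equally.
ok = fuse_factors({"face": 0.92, "fingerprint": 0.88},
                  {"face": 0.5, "fingerprint": 0.5})
```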

