The Ethical Implications and Legal Responsibilities of Biometric Data Security

As part of Solutions Review’s Premium Content Series, a collection of contributed columns written by industry experts in maturing software categories, Ján Lunter of Innovatrics outlines the importance of maintaining ethics and meeting legal responsibilities when working with biometric data.

Biometrics is rising as an innovative technology leading the way to a passwordless future. Password-only authentication is one of the biggest security problems in the world: Verizon’s 2022 Data Breach Investigations Report reveals that 80 percent of data breaches are linked to stolen credentials.

Leading tech companies are turning to biometrics for its safety potential. On May 5, 2022, FIDO announced that Apple, Google, and Microsoft were expanding their support for passwordless standards created by the FIDO Alliance and the World Wide Web Consortium. The three top tech companies have embraced biometrics, signaling the inevitable massive adoption of the technology globally and its potential to replace traditional authentication methods.

However, like all data, biometrics has legal responsibilities, ethical implications, risks, and benefits.

Legal Responsibilities for Companies Operating with Biometric Data

The debate over the ethics of biometric data began when the technology was deployed by government and security organizations, and it has since intensified as private companies have adopted biometrics. At the core of the debate is how biometric data is used and where it is stored, managed, and disclosed.

Protecting personal data is not only essential for business owners but also regulated by governments and under strict oversight in many parts of the world. For example, the European Union’s General Data Protection Regulation (GDPR) classifies biometric data as sensitive data requiring the informed consent of the person involved. Failing to meet GDPR standards can result in legal consequences and fines, damaging a company’s reputation. In the U.S., several federal and state laws also apply to biometrics, and the consequences of breaching them are just as significant.

In 2008, Illinois became the first U.S. state to enact biometric legislation with the Biometric Information Privacy Act (BIPA). Since then, more than 25 states have adopted some form of biometric law, including Texas, Washington, California, New York, Louisiana, Oregon, and Arkansas. These state laws regulate the collection, retention, disclosure, and destruction of biometric information, among other matters.

According to Biometric Update, 2022 could be a big year for biometric laws in the U.S., with two notable cases pending. One, Cothron v. White Castle System, Inc., is awaiting a decision by the Illinois Supreme Court. Cothron brought a proposed class action on behalf of all Illinois White Castle employees over a work policy that required employees to scan their fingerprints to access the restaurant’s computer system, arguing the policy was implemented without prior consent. If each scan counts as a separate violation, the resulting damages could push a company into bankruptcy. Another case, David Karling et al. v. Samsara, is being heard by a U.S. District Court judge in northern Illinois. Samsara argues that applying BIPA to truckers driving in or through Illinois would undermine federal regulations designed to ensure safety and verify drivers’ identities.

Other U.S. federal laws are not specifically drafted for biometric data, but they do set regulations, standards, and requirements for data management. Federal laws that govern data include:

  • The Health Insurance Portability and Accountability Act of 1996 (HIPAA): Regulates the use of data in healthcare.
  • The Gramm-Leach-Bliley Act (GLBA): Oversees data management in the financial sector, including disclosure of customer information.
  • The Privacy Act of 1974: Governs the collection, maintenance, use, and dissemination of individuals’ data held in federal agency records.

Biometric Data Strengths Versus Passwords and Other Data

Biometrics encodes the physical or behavioral characteristics of an individual as data, ranging from fingerprints, facial or voice patterns, and retina scans to typing patterns. Biometrics has come a long way since its early development and has proven to have several strengths and benefits over other types of credentials, such as passwords.

The strengths and benefits of biometrics are:

  • Unlike social security numbers, credit card data, or names, biometric data cannot simply be used for authentication over the internet because it is usually stored as templates. For many uses, it can be stored locally, avoiding the risk of theft in transit.
  • Biometric data is not usually stored in image format; when it is, the images should be strongly protected.
  • Biometric templates are mathematical representations of significant facial features or of minutiae points on fingerprints. A template cannot be reversed to reconstruct a person’s likeness.
  • Unlike username/password combinations, which are easy to steal, guess, or brute-force, biometrics is resistant to such attacks.
  • Even if a perpetrator knows who has access and has a photo of an authorized user, a number of other failsafe measures, such as a liveness check, can prevent the attack.
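To make the template idea above concrete, here is a minimal, hypothetical sketch (the function names, vector values, and threshold are illustrative, not any vendor’s API): a template is a fixed-length feature vector rather than an image, and matching compares vectors by similarity, so the stored numbers never reveal a likeness.

```python
import math

def extract_template(features):
    """Turn a raw feature vector into a unit-length template (no image kept)."""
    norm = math.sqrt(sum(x * x for x in features))
    return [x / norm for x in features]

def match(template_a, template_b, threshold=0.9):
    """Accept if the cosine similarity of two templates clears the threshold."""
    return sum(x * y for x, y in zip(template_a, template_b)) >= threshold

enrolled = extract_template([0.2, 0.8, 0.5, 0.1])       # stored at enrollment
probe_same = extract_template([0.21, 0.79, 0.5, 0.12])  # same person, new capture
probe_other = extract_template([0.9, 0.1, 0.2, 0.7])    # different person

print(match(enrolled, probe_same))   # True: similar vectors are accepted
print(match(enrolled, probe_other))  # False: dissimilar vectors are rejected
```

Real systems use high-dimensional embeddings produced by neural networks, but the principle is the same: only the vectors are compared, and only the vectors need to be stored.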

How to Work Ethically with Biometric Data

Beyond legislation, a number of ethical considerations apply to those working with biometric data. Companies and organizations must have an informed-consent management system in place that stores each person’s consent and allows them to withdraw it at any time. For example, a company with biometric access control at its workplace must also be able to provide biometric-less access and delete a person’s likeness if they decide to withdraw consent.
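As a minimal sketch of such a consent store (the class and method names are hypothetical, not a real product’s API), the essential operations are recording consent, checking it before any biometric processing, and honoring withdrawal:

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """Hypothetical informed-consent store: one record per (person, purpose)."""

    def __init__(self):
        self._records = {}  # (person_id, purpose) -> timestamp consent was given

    def grant(self, person_id, purpose):
        self._records[(person_id, purpose)] = datetime.now(timezone.utc)

    def withdraw(self, person_id, purpose):
        # In a real system, withdrawal must also trigger deletion of the
        # person's stored biometric data (templates and any images).
        self._records.pop((person_id, purpose), None)

    def has_consent(self, person_id, purpose):
        return (person_id, purpose) in self._records

registry = ConsentRegistry()
registry.grant("emp-42", "access-control")
print(registry.has_consent("emp-42", "access-control"))  # True
registry.withdraw("emp-42", "access-control")
print(registry.has_consent("emp-42", "access-control"))  # False
```

A production system would add audit logging and persistence, but the core contract, consent checked before processing and revocable at any time, is what regulators look for.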

Ideally, the best way to store biometric data is to keep only the templates on the server and discard the image used to generate them. This protects an organization against theft of private and personal data, as well as data leaks. If a company has to store personal biometric data, such as mugshots, access to that data must be restricted and clearly defined. EU law requires anyone who accesses sensitive personal data to be properly trained and authorized by the employer; otherwise, the company risks a fine. The transfer of such data to third parties is likewise restricted and also requires consent.

In many cases, storing the biometric data centrally is not required for the technology to work. For example, a smart CCTV camera can extract faces from a video stream by itself and generate templates, sending only templates over the internet to the central server for identification. The same goes for identity verification for services provided via smartphone.

Smartphones are currently able to run neural networks efficiently and can make all the necessary comparisons (selfie vs. ID photo and liveness check) locally, sending only success or failure information to the central server and excluding any personal information. It’s always good to think about the use case, why the biometric data has to be obtained, how it will be used, and the proper ways to discard it after it’s no longer needed.
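A sketch of that privacy-preserving pattern (the function names, example vectors, and threshold are assumptions for illustration): the selfie-vs-ID comparison and liveness check run entirely on the device, and only a pass/fail result ever reaches the server.

```python
def cosine_similarity(a, b):
    # Templates are assumed to be unit-length feature vectors.
    return sum(x * y for x, y in zip(a, b))

def verify_on_device(selfie_template, id_template, liveness_passed, threshold=0.9):
    """Compare selfie and ID-photo templates locally; no biometric data leaves the device."""
    return liveness_passed and cosine_similarity(selfie_template, id_template) >= threshold

def report_to_server(verified):
    # The only payload sent over the network is the boolean outcome.
    return {"verified": verified}

result = verify_on_device([0.6, 0.8], [0.6, 0.8], liveness_passed=True)
print(report_to_server(result))  # {'verified': True}
```

Because the server never receives a template or an image, a breach of the server exposes no biometric data at all, which is the design choice the paragraph above argues for.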

Bias, Discrimination, and Ethical Principles

In 2018, an MIT study found that facial-recognition software had been trained on imagery datasets that were 77 percent male and 83 percent white, causing the algorithms to perform poorly on other genders and skin tones. Another study revealed that Amazon’s recognition tool suffered from the same problem: it was not fully accurate and performed worst when recognizing women with darker skin.

In 2019, the National Institute of Standards and Technology (NIST) of the U.S. Department of Commerce took up the issue and traced the problem to its root: the algorithms at the heart of biometric technology were producing bias and discrimination. NIST analyzed 189 algorithms from 99 developers, using 18.27 million images of 8.49 million people. The findings revealed that poorly developed biometric technology has deficits in recognizing Asian, African American, and Native American faces.

To avoid biased biometric technology, developers must train their algorithms and machine learning models using diverse and inclusive data sets.

Biometrics ethical concepts include:

  • Do not harm: Biometrics organizations must avoid actions that harm people or the environment.
  • Respect for personal data: When shared, stored, and processed, personal data must be respected and treated with care.
  • Justice and accountability: Biometrics should be open, transparent, and accountable.
  • Technology: Biometric systems should be benchmarked for quality and include accuracy, error-detection, and repair mechanisms.
  • Human rights: Biometric development should align with human rights.
  • Equality: Biometric technology should not discriminate based on religion, age, gender, race, sexuality, or others.

Consumers, companies, and organizations all benefit significantly from biometrics. Today the technology is embraced across a wide range of sectors, from the local gym to national IDs and driving licenses, workforce management, banking, blood banks, healthcare, and airports. As the world transitions to a passwordless future, organizations and consumers alike are responsible for developing, and holding organizations accountable to, legal and ethical standards.

Ján Lunter