Digital identity verification technology provider Mitek explores how the issue of biometric bias impacts the transgender and LGBTQ+ community, and why facial recognition isn’t the only answer.
Facial recognition software is a booming industry, with a market size expected to grow from an estimated USD 3.8 billion in 2020 to USD 8.5 billion by 2025, according to ReportLinker. However, human-computer interfaces are almost never built with transgender or non-binary people in mind. As a result, the identity verification characteristics of these demographics end up excluded from AI data sets, creating biometric bias that reinforces existing prejudice.
According to the National Transgender Discrimination Survey, only one-fifth (21%) of transgender people who have transitioned have been able to update all of their IDs and records with their new gender, while one-third (33%) have not updated any of them. With so many individuals lacking formal documents that correctly reflect their gender, they have little power to contest false identifications by algorithms.
If we continue on the same course, software companies will, intentionally or unintentionally, reinforce biases against these minority groups. To create a fairer, more level playing field in how artificial intelligence (AI) technologies impact our lives, the first step is to educate ourselves on the topic so we can push for change and accountability.
Why is biometric bias so important?
Digital access is a daily requirement in everyone’s life. It enables financial transactions, online purchasing, education, healthcare, and even dating. Unfortunately, access is not yet equitable: not every person is afforded the same opportunity. In particular, the LGBTQ+ community is being negatively impacted by biometric bias when it comes to their own digital identities.
With almost 6% of U.S. adults (18 million people) identifying as LGBTQ+, it’s important that the broader ecosystem of players pays attention and makes an effort to understand the challenges faced by this demographic. Only by understanding the roadblocks to accessibility will companies be able to offer alternative digital solutions that are more inclusive of minority groups, which collectively make up a significant percentage of the overall population, and therefore of the customer base.
Biometrics are strong lines of defense in proving and reliably authenticating a person’s identity, but facial biometrics in particular may not be the right first layer of verification for the transgender community. An alternative method that is not gender-specific, such as fingerprints, might be more appropriate.
An individual’s biometric identifiers are essential elements of their digital identity, linked to documentation that helps people around the world conveniently access critical online services. From banking to education to healthcare, identity verification solutions are becoming increasingly ingrained in our daily lives.
At face value, the migration of services online seems convenient, time-saving, and intuitive. However, not all demographics within the population experience such a seamless transition. For certain groups, biometric bias and inequality have created an additional barrier to accessing services, including essential services like healthcare.
Whether incorporated into ID verification tools intentionally or unintentionally, the lack of algorithmic inclusivity in technological design has generated digital exclusion. In a world so dependent on digital access, this barrier has a significant impact on the livelihoods of many people.
We all acknowledge, conceptually, what bias is, but we are only now understanding that the biometric systems developed by tech companies and adopted across industries for ID verification have created a whole new level of unintentional discrimination. It might not be evident at first, because the typical person doesn’t know what mechanisms are at work behind the scenes as they try to access financial services.
Is facial recognition an ineffective biometric for LGBTQ+?
Automated identity verification requires individuals to prove they are who they say they are in these online scenarios. In many situations, a government-issued identity document is required as proof. Identity verification (IDV) technology that matches the applicant to the data on file often uses biometric facial recognition to prove that the person applying for access is not a bot or a fraudster.
This type of recognition shuts down users’ option to self-identify. Instead, it determines the user’s gender simply by scanning their face and assigning the label of male or female based on previously analyzed data. Superficial features, such as the amount of makeup on a face or the shape of the jawline and cheekbones, place a person’s gender into a binary category. As a result, these systems are unable to properly identify non-binary and trans people.
Several researchers have shown how ineffective facial recognition technology is at recognizing non-binary and transgender people, including Os Keyes’ paper on automatic gender recognition (AGR) systems. This is because most facial recognition algorithms are trained on data sets designed to sort individuals into two groups – most often male or female.
Consequently, biometric facial recognition technologies cannot recognize minority subgroups based on their gender expression, because they were never given the information needed to identify people across the spectrum. At the same time, user interfaces that allow people to add their gender information often lack an adequate selection of gender options.
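To make that limitation concrete, here is a minimal, hypothetical sketch using synthetic data and scikit-learn. It is not drawn from any real vendor’s pipeline, but it shows why a model trained only on two labels can never represent anyone outside those categories: whatever face it is shown, the only outputs it can produce are the labels it was trained on.

```python
# Hypothetical sketch: a gender classifier trained only on binary labels.
# The data is synthetic; real systems use deep face embeddings, but the
# limitation is the same - the model can only output labels it has seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy "facial feature" vectors for two training clusters labelled F / M.
X_train = np.vstack([
    rng.normal(loc=[-2.0, -2.0], scale=1.0, size=(500, 2)),  # labelled "female"
    rng.normal(loc=[2.0, 2.0], scale=1.0, size=(500, 2)),    # labelled "male"
])
y_train = np.array(["female"] * 500 + ["male"] * 500)

model = LogisticRegression().fit(X_train, y_train)

# A face whose features sit between the two clusters is still forced into
# one of the two labels; the model has no way to say "neither".
ambiguous_face = np.array([[0.1, -0.3]])
print(model.predict(ambiguous_face))        # one of ["female"] or ["male"]
print(model.predict_proba(ambiguous_face))  # probabilities near 50/50
```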
It’s important to note that biometrics themselves are not actually biased, as they are not making any decisions based on human values. Biometric bias and inequality are caused by a lack of diverse demographic data, bugs, and inconsistencies in the algorithms. For example, if the training data primarily includes information about just one demographic, the learning models will disproportionately focus on the characteristics of that demographic.
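As an illustration of that last point, the hypothetical sketch below (again using synthetic data, not any real training pipeline) trains the same kind of classifier on data dominated by one group; accuracy on the underrepresented group typically comes out noticeably worse.

```python
# Hypothetical sketch: the effect of a training set dominated by one group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def samples(n, centre):
    """Toy 'facial feature' vectors clustered around a group centre."""
    return rng.normal(loc=centre, scale=1.5, size=(n, 2))

# 950 training samples from group A, only 50 from group B.
X_train = np.vstack([samples(950, [0.0, 0.0]), samples(50, [3.0, 3.0])])
y_train = np.array([0] * 950 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)

# Evaluate on balanced, unseen data from each group: the underrepresented
# group is misclassified far more often than the majority group.
print("Group A accuracy:", model.score(samples(500, [0.0, 0.0]), np.zeros(500, dtype=int)))
print("Group B accuracy:", model.score(samples(500, [3.0, 3.0]), np.ones(500, dtype=int)))
```

The same mechanism, at far larger scale and with real face data, is what produces the disparities described here.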
The inability to identify people within these groups has consequences in the real world. The lack of accuracy of these technologies can lead to people being mistreated, from not being able to get approved for financial products and services to facing issues with the government or police due to misidentification. People who aren’t represented lose the ability to be acknowledged and fight for their freedoms and rights.
In fact, some systems, particularly those used by police departments across the United States, have been shown to have a 96 percent error rate when used in practice. And even when gender is properly identified, the system still assumes that the person falls into a binary category. We must be open to continually listening to the needs and challenges faced by all demographics as we develop products. This will help ensure that fairness, inclusion, and accessibility are properly represented and incorporated into our technology.
The bottom line
To reach a state of equal access and inclusion, financial institutions can leverage advancements in multi-modal biometrics, which offer more robust authentication factors beyond facial recognition.
To build equitable access for all communities, we need to think through which biometrics are the most appropriate when requiring people to prove their identity. It will most likely be a combination of biometrics presented in stages depending on the level of risk. We should also question whether these types of systems are even necessary to begin with, and what the design process should look like to ensure equal access and inclusivity. We need to understand our unconscious biases, expand the framework we’re used to following, and change our habit of reusing the same types of data sets if we want to create algorithms that are actually representative and inclusive of our customer audiences.
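One way to picture that staged, risk-based approach is the hypothetical sketch below. The tiers, factor names, and thresholds are illustrative assumptions rather than a description of any specific product: lower-risk actions accept a single non-face factor, while higher-risk actions step up to additional factors, letting the user choose a biometric that works for them.

```python
# Hypothetical sketch of risk-based, multi-modal step-up authentication.
# Tier boundaries and factor names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class AuthPolicy:
    risk_score: float  # 0.0 (low) .. 1.0 (high), supplied by a risk engine

    def required_factors(self) -> list:
        """Return which authentication factors to request, in order."""
        if self.risk_score < 0.3:
            # Low risk: a single non-face factor is enough.
            return ["device_binding"]
        if self.risk_score < 0.7:
            # Medium risk: add a biometric the user chooses, e.g. fingerprint
            # or voice, rather than defaulting to facial recognition.
            return ["device_binding", "user_chosen_biometric"]
        # High risk: step up with a liveness check on top of the other factors.
        return ["device_binding", "user_chosen_biometric", "liveness_check"]

print(AuthPolicy(risk_score=0.2).required_factors())
print(AuthPolicy(risk_score=0.8).required_factors())
```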
Though technology cannot fully solve biases against LGBTQ+ groups, there are opportunities to create new technologies that may help address some of this discrimination. The key is to ensure that people within these communities are involved in the design process. For example, researchers from the Universities of Michigan and Illinois conducted design sessions with 21 members of the trans community and detailed four types of trans-inclusive technologies: technologies for changing bodies, technologies for changing appearances and/or gender expressions, technologies for safety, and technologies for finding resources.
Codified bias has the ability to discriminate and limit access to even the most basic essentials for the LGBTQ+ community. Yet with a lack of government regulation, biometric technologies carry no legal accountability to address accessibility issues for transgender individuals or anyone else within a minority subgroup. These vulnerable groups are at the mercy of corporate organizations that may not yet fully understand how their products are being used in practice and how their digital transformation journey impacts consumers.
As we strive for equality, safety, and fairness for everyone, it’s crucial that we involve our customers and target audiences in our R&D process and authentically apply those learnings so that our machine learning systems become universally representative and the potential for biometric bias is reduced. Mitek CMO Cindy White emphasizes that “organizations have a responsibility to recognize bias in their technologies, and work to adapt models to acknowledge the differences that make us who we are. This could involve diversifying the types of biometric technologies that are used to identify users, retraining systems which are misgendering people, changing the way systems classify by gender and, most importantly, listening to customers.”
Without customer driven innovation, some biometric technologies have the potential to exacerbate existing inequalities and make daily life challenging and unfair for LGBTQ+ individuals. Also, without proper privacy protections in place, data breaches that target facial recognition data may become far more likely. In the wrong hands, a person’s previously undisclosed sexual orientation or gender identity can become a tool for discrimination, harassment, or harm to their life or livelihood.
_______________________________________________________________________
Mitek launches MiPass with advanced biometric authentication
Mitek’s MiPass solution uses a sophisticated combination of biometrics that are extremely difficult to falsify (face recognition, voice verification, and liveness detection) to strengthen trust in your customers’ real-world identities. MiPass can help you increase security and improve customer experience by replacing passwords and one-time passcodes with biometric authentication that you can easily embed into your existing platforms and customer workflows. Find out more today.
_______________________________________________________________________
Join the Future Identity Festival, taking place on the 11th – 12th November 2024 at The Brewery, London.
Whether your concerns are security, inclusion, privacy, interoperability or offering customers seamless experiences, the festival will explore the key trends and technologies shaping the future of identity verification, risk management, fraud prevention and more.