Driven by data privacy and security concerns, the Internal Revenue Service this year moved decisively away from using a third-party facial recognition service to authenticate people creating online accounts.

And while positions have softened somewhat in the weeks since – taxpayers now have an expanded set of identity verification options from the IRS and ID.me – the evaluation the IRS is undertaking of facial recognition software is one that government agency decision makers, policymakers and other stakeholders now confront. That evaluation should not be limited to data and privacy concerns alone: facial recognition software has introduced harmful equity and access biases that impede certain communities’ access to government benefits and taxpayer services.

The U.S. General Services Administration, for its part, affirmed its focus on accessibility with its recently released Equity Action Plan. A core strategy of the Plan to advance equity is in the area of emerging technologies and data sovereignty, where the “GSA commits that it will not deploy facial recognition, liveness detection, or any other emerging technology into production environments until a rigorous review has given the agency confidence that they can be implemented equitably and without causing disproportionate harm to vulnerable populations.”

Equity issues with facial recognition

In the case of identity verification to claim unemployment insurance benefits, facial recognition uses biometric data and official documents to verify that the person applying for UI benefits is who they say they are. But facial recognition algorithms struggle to accurately identify individuals with darker skin tones, creating benefits distribution equity issues that can unfairly deny or delay payments based on race. There have also been complaints that the software misidentifies people of color, gender-nonconforming people and women.

Concerns with the inequities of facial recognition accuracy are not new. A 2019 National Institute of Standards and Technology study found “...that the majority of facial recognition algorithms were far more likely to misidentify racial minorities than whites, with Asians, Blacks and Native Americans particularly at risk.”

Data matching solutions, which have raised concerns of their own, take a different approach from facial recognition: they run matching algorithms against disparate data stored in a data lake to validate the information an applicant provides. Some rely on credit history-based questions – the type of car owned, previous known addresses, the existence of a credit and banking history – requirements that can disadvantage communities of color, young people, the unbanked and immigrants, and serve as a further impediment to receiving benefits in a timely manner.

Access issues with facial recognition

Some facial recognition technologies require a person to upload a government ID, which not everyone has. While lacking an ID does not make someone ineligible for unemployment in certain states, it does make verifying their identity to gain access to unemployment benefits much more difficult.

Even more problematic, facial recognition often requires a smartphone or webcam to upload a selfie alongside a government ID and a copy of a utility bill. This is a serious impediment: only 76% of U.S. adults earning less than $30,000 a year own a smartphone, which means nearly a quarter of the people most in need of assistance could face hurdles with this step of the process.

Facial recognition alternatives

The key question for agencies and policymakers when it comes to verifying identities for tax and benefits services is this: what technology alternatives exist that both protect citizens and combat the benefits and refund fraud that has exploded during the pandemic?

Proven identity analytics technologies exist, and any extended lapse between technological approaches opens opportunities for bad actors to exploit the system. After all, the U.S. Department of Labor estimated that at least 10% of the $872 billion issued in pandemic unemployment benefits as of September 2021 was likely paid improperly.

A better approach involves drawing from data sources that do not carry the same type of inherent equity and access biases, such as digital devices, IP addresses, mobile phone numbers and email addresses.
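To make the idea concrete, here is a minimal sketch of how signals like device history, IP address, phone number and email address might be combined into a simple identity risk score. The signal names, weights and thresholds below are illustrative assumptions for this sketch only, not any vendor's or agency's actual model.

```python
# Hypothetical sketch: combining low-bias identity signals into a risk score.
# All weights and thresholds are illustrative assumptions, not a real model.
from dataclasses import dataclass


@dataclass
class IdentitySignals:
    device_seen_before: bool          # device fingerprint previously associated with this identity
    ip_matches_claimed_region: bool   # IP geolocation consistent with stated address
    phone_tenure_months: int          # how long the phone number has been active
    email_age_months: int             # how long the email address has existed


def risk_score(s: IdentitySignals) -> float:
    """Return a score from 0.0 (low risk) to 1.0 (high risk)."""
    score = 0.0
    if not s.device_seen_before:
        score += 0.3
    if not s.ip_matches_claimed_region:
        score += 0.3
    if s.phone_tenure_months < 6:     # very new numbers are a common fraud signal
        score += 0.2
    if s.email_age_months < 3:        # freshly created email addresses likewise
        score += 0.2
    return round(score, 2)


applicant = IdentitySignals(True, True, 24, 36)
print(risk_score(applicant))  # established signals yield a low score
```

The point of the sketch is that none of these inputs depend on skin tone, credit history or smartphone camera quality; a high score would trigger additional review rather than an automatic denial.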

A logical path forward for identity analytics providers that don’t rely on facial recognition is to leverage financial data as a core data source. Doing so, however, requires a robust, data-rich approach that guards against bias risks. Agencies will need to stay attuned to how this approach affects socioeconomic groups with limited financial histories.

Finally, a goal for identity analytics going forward should recognize that facial recognition systems were in some ways built on the premise that the user is guilty – of not being who they say they are, or of claiming payments they are not eligible to receive. Requiring the user, in effect, to prove their identity and innocence before accessing government services has merit. The downside, however, is that it adds significant friction and frustration for those with a valid right to the services. Ideally we will gravitate toward a more balanced approach that addresses fraudulent activity while maintaining a frictionless experience for citizens who need services. By adopting a more holistic approach, built on unbiased data validation on the back end, agencies can strike this balance.

Government agencies have a dual imperative to protect citizens’ data privacy while ensuring unbiased, frictionless access to government services that Americans rely on. At the same time, the pandemic has accelerated the shift from in-person and other traditional touch points to digital government-citizen channels – a transformation that has also led to more opportunities for fraud. With the right approach, data-driven identity analytics can deliver desired benefits to agencies while minimizing risks to constituents.

Shaun Barry is Global Director for Fraud & Security Intelligence at data analytics provider SAS.

Have an opinion?

This article is an Op-Ed and as such, the opinions expressed are those of the authors. If you would like to respond, or have an editorial of your own you would like to submit, please email C4ISRNET Senior Managing Editor Cary O’Reilly.