Here’s what we told the US Commission on Civil Rights about federal law enforcement’s unregulated use of facial recognition technology – and what must be done

Earlier this month, we appeared before the US Commission on Civil Rights – alongside experts from government, law enforcement, and other advocacy groups – to discuss the civil rights implications of federal law enforcement’s unregulated use of facial recognition technology (FRT).

The Commission, an independent and bipartisan body dating back to the Eisenhower administration, is right to focus its attention on FRT. It’s a tremendously powerful technology that presents serious risks to racial justice, privacy, and basic civil rights and liberties – especially when used by law enforcement without regulation. 

Despite these serious concerns, however, the public lacks even the most basic information about how and when police use this technology. From local departments to federal agencies like the FBI and the U.S. Marshals Service, law enforcement has by and large dodged any meaningful transparency. 

Take the FBI as an example. As we told the Commission, the FBI has been using facial recognition technology for over a decade, yet there is hardly any public information available about its use. We don’t know how often the agency runs searches, for what types of crimes, on what demographics, or with what results. 

What little public information does exist about federal law enforcement’s use stems largely – sometimes exclusively – from investigative reporting, or is scattered across federal auditor reports. It does not come, as it should, from the agencies themselves.

Without this basic information, the public cannot properly assess the technology’s actual public safety benefits – or its potential biases and weaknesses.

At the same time, law enforcement continues to use the technology in ways for which it has not been tested – making transparency all the more important.

As the Office of Management and Budget recently made clear in groundbreaking guidance on federal agency use of AI, technology like FRT must be tested under real-world conditions before it’s deployed. Some facial recognition algorithm prototypes have been tested under laboratory conditions by the National Institute of Standards and Technology (NIST). But when it comes to police use of FRT, lab conditions are not the same as the real world. When police use FRT, they are often running it on low-quality surveillance images, with untrained human operators.

Despite the incredibly high stakes of law enforcement’s use of FRT (if it performs poorly, it can send innocent people to jail or multiply structural discrimination), the technology has not been independently tested under those real-world conditions. 

The importance of that testing is not just hypothetical: real-world testing can and in fact already has revealed serious weaknesses and biases in FRT systems. In 2020, for example, a scenario test sponsored by the Department of Homeland Security measuring the effectiveness of FRT on masked faces found that error rates were higher, on average, for Black individuals.

The status quo of nontransparent, untested use of facial recognition technology by law enforcement cannot continue. What’s needed, as our testimony makes clear, is democratically approved rules and requirements – what we call “front-end accountability” – to govern how and when police can and cannot use facial recognition technology. That’s why we have developed legislative checklists that offer Congress and state lawmakers a clear and comprehensive vision of what that accountability can and should look like.

After years of inaction, there are finally signs that regulation and accountability may be coming at the federal level. The Commission’s hearing came on the heels of a flurry of federal interest in the topic. This includes the recent guidance from the Office of Management and Budget that establishes guardrails for how federal agencies use AI-powered technologies, including FRT. It also includes recommendations that the National AI Advisory Committee, a federal body advising the President on AI policy, has issued to strengthen transparency requirements for federal law enforcement’s use of AI.

These are promising steps toward accountability, but they are not yet final. Meanwhile, police departments across the country continue using facial recognition technology with little to no transparency, oversight, or accountability.

Congressional action is absolutely essential – but until it comes, state and local lawmakers must do their part to ensure that law enforcement is not endangering the rights of constituents and community members today.

“If we keep kicking the can down the road,” we recently told Bloomberg Law, “we could end up in a place that feels more dystopian than you might think.”

Watch our testimony here. Read our full written testimony here.