What does the new White House policy on AI mean for law enforcement? Here are our takeaways.

Last month, the White House Office of Management and Budget (OMB) issued a landmark policy putting in place long-overdue requirements on how federal agencies can – and cannot – use artificial intelligence. 

The policy, which establishes a strong baseline for responsible AI procurement and use across all agencies of the federal government, is significant for a number of reasons, including for what it doesn’t do: exempt law enforcement.

Too often, law enforcement agencies are given special exemptions from ambitious government accountability efforts – take the Privacy Act of 1974, for example. But OMB’s new requirements are just as binding on federal law enforcement agencies, from the FBI to Customs and Border Protection, as on any other.

The policy is also an improvement over the draft version that was released for public comment late last year. The final policy includes a number of stronger requirements, including several that we called for in a New York Times op-ed with Dr. Joy Buolamwini, founder of the Algorithmic Justice League.

Still, the new policy is far from perfect. But it’s a meaningful improvement over the status quo, which has allowed federal agencies like the FBI to use facial recognition and other AI-driven technology in near-total secrecy, with no public guardrails in place. Ultimately, it should serve as a baseline for AI accountability efforts to come.

Here are some key takeaways on what the guidance means for law enforcement: 


What we’re excited about


Improved transparency requirements that apply to law enforcement

The policy contains a number of provisions directing federal law enforcement agencies to be more transparent about when and how they use AI technology. 

This is a big deal. For years, the FBI and other law enforcement agencies have been using facial recognition and other AI-powered technologies while disclosing hardly any information about them to the public. Much of what we know has come from investigative reporting or statements from the technology companies, rather than the agencies themselves.

OMB’s new policy, however, specifically includes expanded reporting requirements for rights- and safety-impacting technologies – a category that includes many tools used by law enforcement, from license plate readers to predictive policing. The policy also includes updated guidance on the federal government’s AI use case inventory that appears to close a loophole – one that the Department of Justice has exploited for years to obscure the FBI’s use of technologies like facial recognition by labeling that information “sensitive.”

Lastly, if an agency now seeks to waive compliance with any requirements for rights- or safety-impacting AI, it must provide a public justification for doing so. While not perfect (more on this below), that’s an improvement over the policy’s draft version, which allowed agencies to waive these obligations without providing any public notice or explanation at all.


New requirements – including independent testing – for use of AI technology

In addition to improving transparency over what AI tools agencies use, the new policy also shapes how federal agencies can actually use these tools. By December of this year, any federal agency seeking to use “rights-impacting” or “safety-impacting” technologies will be required to comply with a number of best practices and common sense measures when doing so.

For example, before agencies can use any rights- or safety-impacting AI, including policing technology like facial recognition and predictive policing, they must complete an impact assessment that includes a comprehensive cost-benefit analysis. And if the benefits don’t meaningfully outweigh the costs, agencies simply can’t use the tool. They’re also now required to engage with underserved communities around each use of the technology, including by soliciting public feedback and, in turn, using that feedback to shape decisions.

Notably, agencies must also conduct independent testing of the AI under real-world conditions. This requirement in particular is crucial. We cannot know whether these tools work (or how biased they are) without this kind of testing. And as far as we – the general public – know, it simply hasn’t been done.



New standards on procurement, including for biometric tools used by law enforcement

Vendors seeking to contract with federal agencies will now have to comply with a set of transparency and performance requirements, including providing adequate information and documentation about the data they used to train their AI systems. 

This, notably, includes an additional requirement for agencies procuring “biometric systems,” like facial recognition technology, to specifically assess the risks that vendors’ training data “embeds unwanted bias” or was “collected without appropriate consent.”

Requirements like these should help mitigate the training-data problems that have baked racial bias into tools frequently used by law enforcement, including facial recognition and predictive policing. They should also serve as a warning to companies that harvest our biometric data without our consent to sell to law enforcement.




What we’re concerned about


Too much agency discretion to waive requirements for rights- and safety-impacting AI

Agencies using rights- or safety-impacting technologies still have too much leeway to waive the new requirements on how they can use those tools. 

While it’s a step in the right direction that agencies now have to provide a public justification for waiving those requirements – and a notable improvement over the draft guidance – they still have too much discretion to unilaterally grant themselves a waiver. Law enforcement agencies should instead, as we noted in the New York Times, “be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights.”



Broad exemptions for national security and the intelligence community 

Although the new policy notably doesn’t contain explicit exemptions for law enforcement, it does contain broad exemptions for national security and the “intelligence community.” 

The Department of Defense and the intelligence community, for example, do not have to make public any information about their use of AI in the AI use case inventory, nor does the intelligence community have to comply with the mandatory guardrails established for rights- and safety-impacting AI.

Those exemptions present several issues. For one, ordinary law enforcement agencies may seek to invoke them to hide their own activities, as they often do with exceptions and authorities meant for national security and terrorism. But the policy is clear: these exemptions do not and should not apply to law enforcement – they are meant specifically to cover agencies like the CIA, the NSA, and offices at DHS and the Treasury.



No effect on state and local law enforcement

Lastly, the policy covers all federal law enforcement but leaves state and local police untouched, even though that is where the vast majority of policing actually happens.

Even after the policy is implemented, state and local law enforcement will remain free to use AI technology as they always have, largely without any oversight or accountability whatsoever. The White House missed an opportunity to use the federal government’s leverage – primarily through financial assistance like grants – to incentivize states and localities to adopt the same guardrails it is requiring of federal law enforcement.


The Big Picture

The White House’s new AI policy has the potential to usher in a promising paradigm shift for the use of AI in law enforcement – but only if it is implemented rigorously and faithfully.

Agencies must take the policy seriously and see it for what it is: a clear recognition that the status quo of rampant, unregulated use of AI is extremely dangerous and cannot continue. 

Waivers and exceptions should be few and far between. Transparency disclosures must be rigorous and thorough – not a matter of checking boxes and paying lip service. And process must shape outcome: if a technology is found to raise serious concerns, that must meaningfully factor into decisions about how it's used. 

Federal law enforcement must follow OMB’s policy in good faith and devote appropriate attention and resources to mitigating AI’s harms, and we look forward to holding them accountable.