Police Disclosure of AI
The Problem
Police across the country increasingly use artificial intelligence (AI) in criminal investigations to help generate leads, gather evidence, and draft reports. This technology, however, is still new and can be unreliable and error-prone. There are already real-world instances of vehicle surveillance systems misreading license plates, leading officers to pursue and arrest the wrong person. Similarly, generative AI has a well-known “hallucination” problem, in which it confidently asserts things with no basis in fact, a problem with potentially devastating consequences in an official police report. In addition, even AI that works in theory can fail in real-world applications, and officers may be more likely to overlook those errors because of our natural bias toward trusting advanced technology like AI. There are also serious privacy concerns and constitutional issues with systems that are vastly more powerful than traditional police methods and that lack practical human limits.
Despite those risks, the public and criminal defendants often lack even basic transparency about whether policing agencies are using AI tools and, if so, which ones. This is because few state laws require police to disclose their use of AI or to maintain publicly available policies. This lack of transparency, in turn, makes it much more difficult to ensure that police are using AI reliably, fairly, and in line with community priorities. For criminal defendants, it can even undermine the right to a fair trial, possibly resulting in a wrongful conviction (which may also mean the actual perpetrator of a crime remains free). And without the ability to review the use of AI in court, unreliable tools or inappropriate uses of AI can slip under the radar and continue to undermine public safety over time.
The Solution
_________________________________________________________________________________________________________________
MODEL STATUTE
Prosecutorial oversight and the adversarial criminal justice system serve as quality controls to catch exactly these sorts of issues, which is why the Policing Project’s model statute on Police Use of AI requires policing agencies to disclose the use of AI in police reports. This ensures that information about police AI use is available to prosecutors, allowing them to comply with their legal obligations to disclose the use of AI to criminal defendants. And, to promote public trust and enable sound policymaking, the statute also requires police to conduct an inventory of, and develop a publicly available policy for, any use of artificial intelligence to aid criminal investigations, whether to establish leads, corroborate suspicion, or write police reports. By bridging this basic transparency gap, the model statute will help ensure that the public, criminal defendants, policymakers, and agencies themselves have the information they need to understand AI’s impact on public safety.
To read frequently asked questions about disclosures of police use of AI in criminal investigations, click here.
To read the full model statute on Artificial Intelligence Use and Inventory, click here.
_________________________________________________________________________________________________________________
GUIDANCE ON AGENCY AI POLICIES
Because our model statute requires policing agencies to have a publicly available policy on AI use, the Policing Project has developed practical guidance on what such a policy should include. To read that guidance, click here.