AI Governance Framework: Preliminary Draft

Artificial intelligence is reshaping law enforcement.

New AI-enabled tools are promising police unprecedented abilities to identify individuals, track movements, detect crimes in progress, and more. These technologies hold genuine potential, but they also carry serious risks — including inaccurate outputs, intrusions on individual privacy, and lack of accountability. For these reasons, sound and balanced governance of AI use in public safety is imperative.

Our AI Governance Framework is intended to serve as a roadmap for such governance. It offers policymakers and agencies clear, concrete guidance designed to ensure that emerging policing technologies are used only when they effectively promote public safety and in a manner that is responsible and accountable to the public. It addresses the entire “pipeline” of AI governance — from the evaluation of tools during procurement to substantive rules around privacy and equity to mechanisms for oversight and accountability.

This version of the framework is an early work in progress — in essence, our “beta” version. We have shared it with law enforcement leaders, technology developers, civil liberties advocates, community activists, and other stakeholders, and will be incorporating their feedback and updating the framework here as we do so. That engagement will continue, and we expect to make substantial revisions as we learn more. We are publishing this version now because AI is being deployed in policing today, often in the absence of clear rules, and the public debate demands concrete proposals. Transparency about our current thinking — and openness to critique and feedback — is a core part of our approach. We intend for this early version of the framework to spur discussion.

Although we initially designed the framework as a regulatory model, we also intend for it to serve as a practical resource for law enforcement agencies navigating AI adoption. Future iterations will incorporate model policies and operational guidance that implement the framework’s principles and that agencies can use directly in their jurisdictions. Our goal is not only to inform lawmakers, but also to support agencies in deploying AI in ways that are lawful, effective, and worthy of public trust.

Section 1: Scope

Defines the scope of AI systems that are covered by the framework and clarifies that the framework applies only to AI capabilities that are rights- or safety-impacting.

Section 2: Regulatory Authority

Describes the state regulatory agency or agencies needed to enforce the framework’s protections.

Section 3: Approving AI Systems

Requires that “Covered AI Systems” falling within the Framework’s scope be approved prior to deployment.

Section 4: Assessment & Monitoring

Delineates a process by which policing agencies and regulatory authorities work together to assess both the benefits and risks of Covered AI Systems before they are deployed.

Section 5: Human Oversight

Requires meaningful human oversight of Covered AI Systems, and describes methods of achieving such oversight.

Section 6: Protecting Privacy

Offers guidance on protecting personal privacy, including data controls, warrant requirements, and other provisions.

Section 7: Equity

Describes measures to ensure equity and fairness in how Covered AI Systems are deployed and to mitigate algorithmic bias.

Section 8: Community Engagement

Requires agencies to engage and partner with the communities that will be impacted by the deployment of AI.

Section 9: Disclosure to the Accused

Requires prosecutors and investigators to disclose the use of a Covered AI System to defense counsel.

Section 10: Use Policies

Requires that agencies adopt and disclose use policies.

Section 11: Documentation & Reporting

Describes reporting requirements to ensure that Covered AI Systems meet the goals laid out in the Assessment section and comply with the other safeguards established under this framework.

Section 12: Compliance Support

Describes a Compliance Support Program to help agencies protect the privacy and security of the data that feeds into and is produced by Covered AI Systems.

Section 13: Auditing

Calls for both internal and external auditing of Covered AI Systems to ensure compliance with the law, and describes certain measures to facilitate that auditing process.

Section 14: Enforcement

Describes enforcement mechanisms for cases of noncompliance with the Framework’s safeguards.

Section 15: Responsible Innovation

Proposes the development of programs to encourage responsible innovation, including a pilot support program, a pre-development steering program, and regulatory sandboxes.