The second category of FRT-related legislation focuses on regulating FRT’s actual operation.

Transparency in Use & Impact Assessments

Several legislatures are currently considering artificial-intelligence-specific legislation that would promote transparency in, and/or set standards for, the types of algorithms that are central to the functioning of FRT.

The Algorithmic Accountability Act of 2019 (S.1108, Sen. Wyden (D); HR2231, Rep. Clarke (D)), currently before both houses of Congress, would direct the FTC to require entities that use, store, or share personal information to conduct automated decision impact assessments and data protection impact assessments. The AI in Government Act of 2019 (HR2575, Rep. McNerney (D)) would require each federal agency to solicit public feedback in developing a governance plan concerning the agency’s applications of artificial intelligence and make this plan publicly available online.

At the state level, a bill before the Washington legislature (HB1655, Rep. Hudgins (D) and others) would establish guidelines for government use and procurement of automated decision systems. The New Jersey Algorithmic Accountability Act (AB5430, Asm. Zwicker (D) and others) currently before that state’s legislature would require certain businesses to conduct automated decision and data protection impact assessments.

Relatedly, many of the CCOPS-inspired measures discussed above – those in Berkeley (Cali.), Davis (Cali.), Cambridge (Mass.), Seattle, and Yellow Springs (Ohio) – include requirements that law enforcement draft a use policy and present that policy to government officials in order to obtain authorization to acquire and use FRT. These use policies address topics such as the technology’s purpose; authorized and prohibited uses, including the rules and processes required prior to use; who can access collected data and how; safeguards to prevent unauthorized data access; safeguards against any potential violation of civil liberties; how information collected may be accessed by the public; length of data retention; third-party data sharing; training required for individuals authorized to use the technology or the data collected with it; and mechanisms to ensure the policy is followed and to monitor for misuse. The Bureau of Justice Assistance of the US Department of Justice has also issued a Face Recognition Policy Development Template for State, Local, and Tribal Criminal Intelligence and Investigative Activities.

Use policies are also contemplated by enacted or proposed legislation directed at biometric privacy. For example, two New York state bills (S01203, Sen. Ritchie (R); A01911, Asm. Gunther (D) and others) would require private entities in possession of biometric identifiers (including scans of face geometry) to develop a written policy establishing a retention schedule and guidelines for permanent destruction of the identifiers. This proposal mirrors an existing requirement under the Illinois Biometric Information Privacy Act.

Accuracy Requirements

Inaccuracy and racial disparities in error rates are frequently cited concerns about FRT, and for good reason. Numerous reports, research papers, and tests – including the First Report of the Axon AI & Policing Technology Ethics Board, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, and a 2018 test conducted by the ACLU – substantiate these concerns. Some legislation has aimed to address them by imposing accuracy requirements.

Legislation currently being drafted at the federal level but not yet introduced would require that certain face recognition algorithms be audited by the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), the same government agency that currently operates the Face Recognition Vendor Test. Such legislation could require, for example, that NIST set strict standards that do not permit racial disparities in false positive and false negative rates, and that algorithms meet minimum standards under a variety of conditions.
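To make concrete what an audit of this kind would measure, the sketch below computes false positive and false negative rates separately for each demographic group and reports the largest gap between groups. It is an illustrative sketch only – the record format, field names, and grouping are hypothetical and are not drawn from any bill’s text or from NIST’s actual Face Recognition Vendor Test protocol.

```python
from collections import defaultdict

def group_error_rates(records):
    """Per-group false positive and false negative rates.

    Each record is a dict with hypothetical fields:
      'group'     - demographic group label assigned for the audit
      'is_match'  - ground truth: probe and gallery images show the same person
      'predicted' - the algorithm's match decision
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
    for r in records:
        c = counts[r["group"]]
        if r["is_match"]:
            c["pos"] += 1
            if not r["predicted"]:
                c["fn"] += 1  # missed a true match (false negative)
        else:
            c["neg"] += 1
            if r["predicted"]:
                c["fp"] += 1  # flagged a non-match (false positive)
    return {
        group: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for group, c in counts.items()
    }

def max_disparity(rates, metric):
    """Largest gap between any two groups on the given error-rate metric."""
    values = [r[metric] for r in rates.values()]
    return max(values) - min(values)
```

A standard of the kind contemplated above might then require each such gap to fall within a fixed tolerance under every tested condition.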

Washington state bills (SB 5376, Sen. Carlyle (D) and others; HB1854, Rep. Kloba (D) and others) would impose limits on FRT use as part of wide-ranging privacy legislation, including a requirement to verify the accuracy of FRT prior to use.

Court Orders & Cause Standards

For many years, jurisdictions have considered requiring judicial authorization or a showing of probable cause before the state may use FRT. Maryland’s Face Recognition Act (HB1148, Del. Sydnor (D)), introduced in early 2017 before being withdrawn, would have established probable cause standards – and required judicial authorization in certain circumstances – for law enforcement to run certain facial recognition searches. A failed 2002 bill (HB454, Del. Griffith (R)) in the Virginia state legislature proposed to prohibit localities and law enforcement agencies from using FRT unless certain criteria for a court order were first met.

More recently, a New York State bill (A01692, Asm. Abinanti (D)) currently under consideration would prohibit the state, its agencies and departments, and contractors doing business with any of them from retaining facial recognition images or sharing such images with third parties without legal authorization by a court.

The Federal Police Camera and Accountability Act (HR3364, Reps. Norton (D) and Beyer (D)) would prohibit the use of FRT on video footage obtained from police body cameras and dashboard cameras except with a warrant issued on the basis of probable cause.

In 2012, an investigative report by British Columbia’s Information and Privacy Commissioner held that the province’s Freedom of Information and Protection of Privacy Act prohibited the Insurance Corporation of British Columbia (a public body that issues license and identification cards) from using its facial recognition software to assist police with their investigations in the absence of a subpoena, warrant, or court order. The report was issued in the context of the Corporation offering assistance to police in the aftermath of a riot that broke out following the Vancouver Canucks’ Stanley Cup playoff loss.

Notice & Consent Requirements

In a variety of settings, legislatures are considering or have imposed notice and/or consent requirements that restrict FRT use.

At the federal level, the Commercial Facial Recognition Privacy Act of 2019 (S.847, Sen. Blunt (R)) would prohibit certain entities from using FRT to identify or track an individual without first obtaining affirmative consent, which involves “an individual, voluntary, and explicit agreement to the collection and data use policies” of an entity.

At the state level, proposed California legislation (AB1281, Asm. Chau (D) and others) would require a business using FRT to disclose that usage at its entrance and provide information about the purposes of its use. A bill before the Massachusetts legislature (S.1429, Sen. Montigny (D)) would increase the transparency around use of DMV photos for FRT purposes, including requiring notices to be posted at licensing offices regarding law enforcement searches of license and identification photographs through targeted face recognition. Washington state bills (SB 5376, Sen. Carlyle (D) and others; HB1854, Rep. Kloba (D) and others) would impose limits on FRT use as part of wide-ranging privacy legislation, including requiring consent from consumers prior to deploying FRT in physical premises open to the public.

In Canada, Alberta’s Information and Privacy Commissioner opened an investigation in August 2018 under that province’s Personal Information Protection Act (the provincial private sector privacy law) concerning the use of FRT without consent at shopping centers in Calgary. Canada’s Privacy Commissioner opened a parallel investigation into the same issue under the federal Personal Information Protection and Electronic Documents Act (the federal private sector privacy law). Both investigations are ongoing.

Limits on Using FRT-Generated Evidence

In the state of Washington, HB1654 (Rep. Ryu (D) and others), originally a moratorium bill, has since been superseded by a substitute bill that simply provides that a police officer may not use the results of a facial recognition system as the sole basis to establish probable cause in a criminal investigation. This requirement is similar to various police department policies, including that of the NYPD, which state that the results of a face recognition search are meant to provide investigative leads and should not be treated as a positive identification. As Georgetown researchers write, "In theory, this is a valuable check against possible misidentifications . . . However, in most jurisdictions, officers do not appear to receive clear guidance about what additional evidence is needed to corroborate a possible face recognition match."

Reporting Requirements

Many of the CCOPS-inspired measures discussed above – those in Berkeley (Cali.), Davis (Cali.), Cambridge (Mass.), Seattle, and Yellow Springs (Ohio) – impose periodic reporting requirements to public bodies after the technology is deployed. These periodic (often annual) reports provide information on matters such as how the technology has been used, the quantity of data gathered, the sharing of data (if any) with outside entities, geographic deployment, complaints (if any) about the technology, results of internal audits, information about violations or potential violations of use policies, requested modifications to the policies, data breaches, effectiveness, costs, and whether the civil rights or liberties of any communities or groups are disproportionately impacted by the surveillance technology’s deployment.

A reporting requirement currently under consideration as part of a broader FRT policy in Detroit would require police to provide a weekly report to the police Board that includes the number of facial recognition requests fulfilled, the crimes the requests were attempting to solve, and the number of leads produced by the FRT.