Introduction
Each day, hundreds of thousands of people across the nation dial 911. They do so seeking help with a range of problems and concerns: violent crime, substance use, animal control — and everything in between.
It is the role of the 911 dispatcher to assign these calls to an appropriate responder. That role is pivotal. A dispatcher must use their judgment and training to assess the situation at hand and determine what is needed — police, fire, emergency medical services, or something else entirely.
In the absence of specialized services for emergencies outside of police, fire, and EMS, police often are the default responders to calls for service that are not medical or fire-related. This is so even when a law enforcement response is not necessary. Indeed, some have described police as “society’s only 24-hour general purpose responder.” Police are dispatched to conduct welfare checks, manage houseless populations, and handle issues resulting from substance use — even when there isn’t any concern that would call for enforcement of the law or the use of force.
This default reliance on police for first response carries with it real consequences. Officers routinely are called upon to intervene in situations for which they have received minimal or even no training. Many officers are ill-equipped to handle complex social problems such as homelessness and substance use disorders. Often this means that the problems that caused people to call for help never are resolved. Meanwhile, officers’ time and attention are drawn away from situations that genuinely require law enforcement expertise.
Moreover, reliance on police for first response carries with it the potential for escalation. With the arrival of police comes the possibility of arrests and force — including lethal force. These harms are borne disproportionately by Black and brown communities, as has been well documented.
For these reasons, jurisdictions have been exploring alternative response models, which rely on actors other than police. Mental health specialists, for example, can be dispatched to stabilize individuals who are in crisis. Several jurisdictions already have begun experimenting with alternative approaches to traffic collision response, utilizing non-police accident responders for collisions that don’t involve physical injuries. Other jurisdictions are deploying trained mediators and community service officers to address communal conflicts like noise complaints.
What AI Can Offer
As these and other models emerge as viable solutions to our overreliance on police for first response, technology could play a critical role in ensuring the most appropriate responders are deployed.
AI could help dispatchers identify calls that are suitable for diversion to alternative responders. Through natural language processing — a branch of AI focused on the interaction between computers and human language — calls to 911 centers could be transcribed in real-time, enabling automatic flagging of key words and phrases that suggest a situation might benefit from a non-police response. For instance, AI systems could be used to identify calls that are related to mental health or substance use issues, where there are no indicators of a need for law enforcement (such as violent behavior).
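The flagging approach described above can be illustrated with a minimal sketch. Everything here is hypothetical: the keyword lists, the function name, and the simple matching logic are illustrative assumptions only — a deployed system would rely on a trained language model and far richer signals, not bare keyword matching.

```python
# Hypothetical sketch of keyword-based call flagging. A real system would
# use a trained NLP model on the live transcript, not simple string matching.

# Hypothetical phrases suggesting a behavioral-health (non-police) response.
BEHAVIORAL_HEALTH_TERMS = {"suicidal", "overdose", "hearing voices", "panic attack"}

# Hypothetical phrases suggesting law enforcement may still be needed.
SAFETY_RISK_TERMS = {"weapon", "gun", "knife", "violent", "assault"}

def flag_for_diversion(transcript: str) -> bool:
    """Return True if the call transcript suggests the call may be
    eligible for diversion to an alternative (non-police) responder."""
    text = transcript.lower()
    mentions_behavioral = any(term in text for term in BEHAVIORAL_HEALTH_TERMS)
    mentions_risk = any(term in text for term in SAFETY_RISK_TERMS)
    # Flag only when behavioral-health indicators appear with no safety-risk
    # indicators (e.g., no mention of weapons or violence).
    return mentions_behavioral and not mentions_risk
```

Under this sketch, a transcript mentioning that someone is suicidal but unarmed would be flagged for possible diversion, while any mention of a weapon would leave the call routed to police — mirroring the "no indicators of a need for law enforcement" condition described above.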
Some jurisdictions have undertaken promising pilots of this sort of technology. In Denver, an alternative response program called “STAR” sends behavioral health professionals and paramedics to people in distress. The STAR program has begun testing software from Corti AI, using it to analyze past calls to identify ones that may have been eligible for STAR diversion. In the future, STAR hopes to implement the system for real-time identification and rerouting of STAR-eligible calls.
Another promising, albeit distinct, application of AI is to equip dispatchers with crucial contextual information about callers. Some advanced 911 platforms now leverage AI to sift through data and retrieve essential details, such as information from a person’s “Smart911” profile. These profiles, which can be created by individuals or healthcare workers on behalf of at-risk patients, can include mental health information and contact information for friends and family members. The goal is to provide dispatchers with a more complete picture of the situation, ultimately leading to more informed decisions and better outcomes for those in need.
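The profile-retrieval idea can also be sketched briefly. The data model, field names, and in-memory store below are illustrative assumptions, not how Smart911 or any vendor platform actually works; a real deployment would sit on a secure, access-controlled database with audit logging.

```python
# Hypothetical sketch of surfacing caller-profile context to a dispatcher.
# The profile schema and in-memory store are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CallerProfile:
    name: str
    mental_health_notes: str = ""
    emergency_contacts: list[str] = field(default_factory=list)

# Hypothetical store keyed by phone number; in practice this would be a
# secure, audited database with strict access controls.
PROFILES: dict[str, CallerProfile] = {
    "555-0100": CallerProfile(
        name="Jane Doe",
        mental_health_notes="History of PTSD; responds best to a calm voice.",
        emergency_contacts=["John Doe (brother): 555-0101"],
    ),
}

def lookup_context(phone_number: str) -> Optional[CallerProfile]:
    """Retrieve the caller's profile, if one exists, so the dispatcher
    sees relevant context alongside the live call."""
    return PROFILES.get(phone_number)
```

Even this toy version makes the policy stakes concrete: whoever can call `lookup_context` sees sensitive mental health details, which is why the access and data-accuracy policies discussed below matter.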
Although these technologies are still in their infancy and not yet implemented widely, the potential for increasing diversion from police to alternative responders is clear. With proper funding and development, these AI tools could revolutionize how emergency services respond, enabling a more tailored and effective approach to public safety.
Key Considerations for Policymakers
As jurisdictions explore the use of AI in 911 call centers, they should consider what policies might be needed to maximize the potential benefits while minimizing potential harms.
Pilot Programs. Before widespread adoption, pilot programs should be used to provide real-world testing of AI systems in call centers. For example, Denver’s pilot program, discussed above, currently is being used to analyze past calls. Only if the system performs well will it be implemented to analyze calls in real time. Pilot programs provide a supervised environment to closely monitor and assess novel technologies, enabling informed decision-making around their broader adoption.
Training. Call center personnel should receive comprehensive training to ensure effective oversight of AI systems. Humans, not AI tools, should always make the final decision in determining whether alternative responders should be deployed in specific situations.
Use Policies. Clear policies are essential for effective implementation of AI in 911 call centers. These policies should govern who has access to data and under what circumstances. This is particularly important in the context of systems that provide dispatchers with historical information about callers, including sensitive data such as mental health histories. Policies should also address data accuracy. Current addresses, mental health statuses, and contact information of family and friends, for instance, should be updated regularly to ensure that the right care is given to the right caller.
Moreover, policies should ensure that any sensitive data collected for purposes of promoting alternative response are not reappropriated for police use in investigations. The use of this data for investigative purposes may well prove counterproductive in the long run, diminishing trust and making it less likely that individuals will volunteer information that could prove crucial in improving future outcomes.
Finally, there is a very real concern that the types of information collected in caller profiles (such as their history of mental illness) might be used to assign risk to callers, which might reinforce biases or escalate situations. Careful safeguards — including that information in caller profiles may not be used for risk scoring (assigning a likelihood of criminality to a person based on various factors) — are necessary to ensure that these tools are used responsibly.
For more resources on the use of AI in first response, please visit policingproject.org/ai-policy-hub.