Law Enforcement Use of Predictive Policing Approaches: Proceedings of a Workshop (2025)

Suggested Citation: "4 Key Considerations for Predictive Policing Technologies." National Academies of Sciences, Engineering, and Medicine. 2025. Law Enforcement Use of Predictive Policing Approaches: Proceedings of a Workshop. Washington, DC: The National Academies Press. doi: 10.17226/28036.

4

Key Considerations for Predictive Policing Technologies

This chapter delves into considerations for the implementation of both person-based and place-based predictive policing technologies. While the two approaches differ in their specific focus, many of the considerations explored apply to both; where a consideration applies to only one approach, this is noted. Key messages include the following:

  • Using historical data in place-based predictive policing tools can perpetuate systemic biases and discrimination. (Cummings)
  • Racial segregation can impact implementation of place-based predictive policing, potentially perpetuating discrimination and targeting of historically marginalized communities. (Richardson)
  • Predictive policing (both place-based and person-based) raises questions about the applicability of traditional legal constraints on policing, including Fourth Amendment protections. (Joh)
  • Community engagement is a cornerstone of effective crime prevention and as such could be an essential element in the future development and implementation of place-based predictive policing approaches. (Gill)
  • Predictive tools can reinforce biases, exacerbating existing inequalities. (Redmiles)
  • Crime is a systemic issue; crime prevention and reduction require systemic solutions. (Musa)

LEGAL CONSIDERATIONS

Reliance on data in policing is not new, said Elizabeth Joh, University of California, Davis. Traditional policing involves sifting through and assessing information about who is suspicious and who is not, and which areas might require greater police attention. In the traditional model, these data come from direct police investigation, human surveillance, tips from citizens, and paper records. The regulatory framework for policing, Joh said, is premised on this traditional model of human decision making. For example, the Fourth Amendment requires that an officer conducting an automobile stop or pedestrian stop (i.e., stop and frisk) has a “reasonable suspicion” that the person is involved in criminal activity. The legal quantum—that is, the minimum amount of justification that an officer needs—is based on the totality of the circumstances. The Supreme Court of the United States has described “totality of the circumstances” in several ways, said Joh, but one description states that “the police can rely on inferences from and deductions about cumulative information.” This is an intentionally flexible standard; the Supreme Court has resisted pressure to provide a quantification or calculation of the reasonable suspicion standard. Flexibility of the standard allows for incorporation of many kinds of data sources, Joh noted.

Predictive policing represents an acceleration in technology that raises questions about how these traditional standards apply, said Joh. Because the reasonable suspicion standard is flexible, predictive algorithms are not likely to be reined in by Fourth Amendment constraints, she stated. Because courts have allowed police to use many sources as part of the totality of the evidence—including informants, tips, and observation—predictive policing analyses will also likely be folded into the totality of the evidence, she explained.

Joh agreed with Jim Bueermann’s earlier observation that predictive policing is becoming one aspect of a larger automated police ecosystem. This ecosystem will likely combine predictive policing algorithms with tools like video analytics, license plate recognition, gunshot detection, and social media analysis, using a platform designed and sold by a private company, she said. This future ecosystem will inform police decision making, and the reallocation of decision making to algorithms poses a significant challenge for regulating police activity through traditional constitutional constraints or federal and state regulations.

Another major challenge, Joh noted, is the technology-to-human “handoff problem.” When a technology produces a prediction or conclusion, what should an officer do with this information? Resulting actions may vary from agency to agency and officer to officer. Depending on how technology-generated information is used, unintended and potentially harmful consequences may result—consequences that were not anticipated by the agency or the technology developer, she said. This is not an issue with the technology itself, she clarified, but with the interface between the technology and the police. For example, an officer may decide to focus on an area of town because that officer has previously seen multiple crime predictions for that area. This decision is not based on traditional policing information (e.g., observations, tips) nor on a specific prediction. Instead, the officer’s decision to focus on the area is a human judgment based on awareness of past predictions produced by predictive policing tools; this kind of extrapolation is not the intended use of such technology, said Joh.

Despite a great deal of uncertainty about predictive policing technologies, these tools are being used and integrated into everyday policing. Use of such technologies in policing is growing, said Joh, so it is critical to address questions about the regulation of predictive policing and related tools now. Rather than assuming that traditional constitutional frameworks are adequate for ensuring appropriate use of these tools, a proactive regulatory framework could address predictive policing as part of an automated ecosystem of police technologies. Joh cautioned against a willingness to treat these technologies as something to be applied experimentally. She echoed previous speakers in noting that in the medical field, regulators conduct thorough cost-benefit analyses of drugs before they are made publicly available. Police technologies also have the capacity to harm people, families, and communities, yet such cost-benefit analyses are generally not conducted before these new technologies are implemented in the field.

COMMUNITY ENGAGEMENT

Charlotte Gill, George Mason University, presented best practices for community engagement in policing, related to the use of place-based predictive policing approaches. Community engagement, she said, is a prerequisite for effective, ethical, democratic policing. The community is the foundation of all social institutions and provides the mandate for policing—law enforcement “for and of the people” must balance the maintenance of order and public safety with a community’s ability to exercise its freedoms equitably and safely, Gill stated.

This is a delicate balance, said Gill—one that requires commitment to the idea of co-production of public safety. Co-production means that the police and the community engage as equal partners in maintaining public safety. This model of collaboration is not just about “doing the right thing,” Gill noted, but can also be the cornerstone of effective crime prevention programs. In fact, scholars have suggested that all evidence-based policing efforts be viewed as tactics aimed at establishing community trust and legitimacy, she pointed out.


Prioritizing community engagement in policing and crime prevention efforts has many benefits, said Gill. First, police do not have the capacity or knowledge to address all underlying causes of crime. In the absence of community-based alternatives, communities rely on police to respond to a variety of social problems that extend well beyond the scope of law enforcement. Second, coordination between police and other stakeholders within a specific geographic area leads to localized solutions that are often more effective and sustainable than those achieved by a single stakeholder. Third, community participation ensures representation of diverse voices, said Gill. While broad community participation is not a cure-all for historic injustices perpetuated by police and other government institutions, she noted, empowering community members to participate in and even lead crime prevention efforts can help mitigate the potentially alienating effects of police-directed strategies.

Although community engagement is an important aspect of effective policing strategies, said Gill, it also presents significant challenges that fall under three main themes: authenticity and representation, leadership and expertise, and data and outcomes. Authenticity and representation, she said, require genuine collaboration. Gill and colleagues conducted a systematic review of “community-oriented” policing programs and found that the majority only passively or indirectly involve the community (Gill et al., 2014). For example, police may engage in “knock-and-talk” efforts, in which they go door to door to ask residents about their safety concerns, but they do not involve residents in the development of responses to these concerns. Engendering genuine community participation takes time, effort, and attention, said Gill, especially in places where relationships with police are historically strained. Another challenge may be balancing the competing interests of various constituencies within the community.

Issues of leadership and expertise can also present challenges in community engagement, said Gill. Police are used to being the “experts” on crime prevention but may not have all the expertise or resources necessary to fill that role. Ceding control and deferring to the expertise of other decision makers and community members requires humility, she said. Determining who represents the community can also be challenging. It is important for law enforcement both to consider the perspectives of those individuals who participate and to reach out to those whose voices may be missing. For community engagement efforts to succeed, Gill said, it is important both to recognize past harms and to develop a process that encourages productive conversations without perpetuating participants’ trauma.

Finally, said Gill, collaborative community efforts can present challenges related to data and outcomes. One issue particularly relevant to predictive policing involves the ways data are collected and used. Do data perpetuate existing biases? Do data represent community concerns or police priorities? For example, she said, community members may be particularly concerned about an issue, but if they do not contact police, the issue will not appear in official data.

In terms of outcomes, collaborative efforts between the police and the community may not immediately demonstrate measurable success—it is important to pay attention to the success and sustainability of the partnership itself, said Gill. A collaborative partnership may result in changes to measurable outcomes, but in some cases the impact may appear to be negative. When the police and community work together, she explained, the community may be more willing to report crimes to the police. If police crime records are being used to measure program success, this “crime reporting sensitivity effect” could attenuate the measured effectiveness of promising programs.
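The crime reporting sensitivity effect Gill describes can be illustrated with a small arithmetic sketch. The incident counts and reporting rates below are invented for illustration only:

```python
# Invented numbers illustrating the "crime reporting sensitivity effect":
# a program that truly reduces crime can appear ineffective in police
# records if it also increases the community's willingness to report.
true_before, report_before = 100, 0.50  # 100 incidents, half reported
true_after, report_after = 80, 0.75     # crime falls 20%, reporting rises

recorded_before = true_before * report_before  # 50.0 recorded incidents
recorded_after = true_after * report_after     # 60.0 recorded incidents

# Recorded crime *rises* even though true crime fell.
print(recorded_before, recorded_after)  # → 50.0 60.0
```

Under these assumed numbers, an evaluation relying solely on police records would conclude the program increased crime by 20 percent, when in fact crime fell by 20 percent.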

To address the challenges of establishing effective and sustainable community partnerships, Gill shared lessons from research on successful community coalitions (Gill et al., 2024). Three key themes emerge from this body of research: cultivating engagement, building capacity, and avoiding assumptions. Cultivating engagement means encouraging active participation that grants genuine empowerment and ownership to all stakeholders involved. Strategic mobilization is needed to turn participation into action; this involves planning, establishing leadership and expectations, designing appropriate strategies, and weaving the coalition into the fabric of the community (Gill et al., 2024). It is important that outreach and recruitment for coalition membership align with the culture and expectations of the community and are targeted toward those most affected by the issue.

Capacity building is another essential part of successful community coalitions, Gill said. Building member capacity involves enhancing members’ communication and conflict resolution skills; it may also require providing resources to allow members to fully participate (e.g., childcare). Relational capacity is built through developing internal processes to engender a culture of trust, equity, and shared vision. Building organizational capacity requires developing expectations for leadership, roles and responsibilities, and participation. Finally, said Gill, programmatic capacity is built through developing a clear implementation and evaluation plan.

All members of community partnerships should avoid assumptions about other members, said Gill. She encouraged decision makers who want to learn about and partner with communities to reach out through grassroots community organizations, which frequently have considerable power, reach, and historical knowledge. These organizations are often trusted by community members who may not otherwise engage with formal institutions, she said.

Given these lessons about community engagement in policing in general, Gill offered several specific considerations for community engagement in predictive policing programs. First, she noted the importance of early and active community engagement; it is best for engagement to begin prior to purchasing or deploying a predictive policing technology and to continue throughout the process. Second, aligning the predictive policing approach with the community’s public safety priorities is critical. For example, the targeted criminal behavior needs to reflect the issues most important to the community, and the data used as inputs should not perpetuate biases or affect community members’ privacy and civil rights, said Gill. The guiding principle, Gill said, must always be whether the data and locations used in predictive policing reflect the issues that are most important to the community.

In conclusion, Gill said that no policing intervention on its own can solve all issues important to a community. When considering the use of a technology or approach, it is important for relevant actors to consider how it fits within the broader community ecosystem. Building relationships and capacity with diverse constituent groups before a new initiative is deployed is beneficial. Furthermore, it is important for community engagement to be sustained throughout program development, implementation, and evaluation rather than “treating engagement as another box to check.” Gill said that despite the challenges, the development of new policing initiatives can be an opportunity for meaningful, long-term community engagement.

During the question-and-answer session, Nancy La Vigne shared an example of how community engagement can make predictive policing models more effective. The Newark Public Safety Collaborative,1 launched by the School of Criminal Justice at Rutgers University-Newark, uses risk-terrain modeling to generate maps showing where gun violence is concentrated. With this information in hand, the community discussed why violence seemed to be concentrated around bodegas. Community members interviewed bodega owners and customers to get their perspectives. After several community discussions, the project installed lighting in the neighborhood, and levels of certain violent crimes decreased, including in some areas with bodegas. Despite the critiques of predictive policing, La Vigne urged participants not to “throw out the baby with the bathwater”; predictive policing tools can be a starting point for authentic engagement and collaborative work to address persistent public safety issues in communities.

ETHICAL ARTIFICIAL INTELLIGENCE

Renée Cummings, University of Virginia, spoke broadly about the past and future ecosystems of police technology. Efforts to use data in policing have had names ranging from “problem-oriented policing” to “evidence-based policing” to “precision policing.” Regardless of the name and the specific approach, Cummings said, communities may feel re-victimized, re-traumatized, and re-marginalized if these tools are used to oversample high-need, under-served, under-resourced communities. Data-driven approaches based on historical data sets risk repeating past prejudices and amplifying discrimination, she said. Using data to drive decision making can be enormously powerful, said Cummings, but has the power to do harm as well as good. There is “extraordinary promise” in artificial intelligence (AI) and machine learning, but potential risks include algorithmic bias, data bias, bad actors, and disinformation.

___________________

1 https://newarkcollaborative.org/

Cummings agreed with Bueermann that it is important to ensure that technologies used in policing have robust safeguards, ethical guidelines, and oversight mechanisms in place. Since new technologies can perpetuate existing biases and inequities, said Cummings, it is important to see into the “black box” to understand how data are being used to make decisions. While there is an appearance of neutrality and a presumption of objectivity in data and AI, these tools are not free of human judgment. Cummings shared the “Blueprint for an AI Bill of Rights,” which outlines the basic rights to be protected:

  • The right to safe and effective systems
  • Protection from algorithmic discrimination
  • The right to data privacy
  • Notification of usage and explanation of implications
  • Ability to opt out and choose human alternatives when appropriate

Cummings noted that a federal memo published in 2024 established baseline protections for civil rights and safety that all federal agencies must meet in their AI use. This memo builds on the 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.2 People have the right to know when a technology is being deployed, and where, when, why, and how it is being used. Furthermore, they have the right to know the limitations of the technology and what safeguards and protections are in place. New technologies are perpetuating many of the old stereotypes, said Cummings, and it is essential to understand how inputs lead to outputs. Additionally, it is important for tools and models to be audited and assessed for impact before they are used on people. Cummings suggested that oversight of data science and AI could be accomplished through an institution similar to the U.S. Food and Drug Administration; such an organization could ensure appropriate oversight, robust security, rigorous accuracy, traceability, and detailed documentation. This is the time, she said, for a “radical re-imagining of predictive analytics” that centers innovation and public safety around community engagement and collaboration. Technology is a tool that can create an “extraordinary amount of harm,” Cummings stated. Technology should be designed to include, empower, liberate, and advance the public interest and generate public benefit, she concluded.

___________________

2 https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

ROLE OF SEGREGATION

Rashida Richardson, Northeastern University, presented an analysis of the role of racial and residential discrimination in shaping the data input into place-based predictive policing tools, and explained how those data can reinforce bias and discrimination. She challenged the claim by some public officials and law enforcement leaders that adoption of predictive policing technologies can produce fairer and more constitutional policing. This argument, she said, is based on the idea that place-based predictive policing tools enable more objective, unbiased decision making. Richardson argued that this claim overlooks the inextricable link between race and space in the United States due to centuries of racial segregation. This means that most data about space can introduce proxy bias and therefore skew the outputs of a predictive policing system, exacerbating and possibly concealing or distorting racially biased policing.

Notably, crime data reflect only those crimes reported or identified by law enforcement, Richardson explained. Racial segregation, including practices of redlining, has resulted in the concentration of societal problems in lower-income, non-White neighborhoods, which are then the subject of increased police presence, resulting in disparities in law enforcement stops, arrests, and other interactions that generate crime data. Available crime data thus reflect the results of discrimination, which can then skew the results of predictive algorithms using those data. This is the proxy bias problem, she said. Police decision making based on these data could reinforce and deepen existing inequalities.

Turning specifically to place-based predictive policing, Richardson noted that many place-based predictive policing systems are designed based on hot-spot policing theory, which asserts that reported crimes tend to aggregate or cluster in relatively concentrated geographic areas. However, Richardson stressed, data documenting the concentration of crime are skewed by the results of racial discrimination. As a result, the use of place-based predictive policing systems can generate a feedback loop in which police reproduce and reinforce historical patterns of segregation. First, historical practices of segregation concentrated societal problems in lower-income and non-White neighborhoods. This drives greater law enforcement presence in those areas. The increased police presence compounds disadvantage and reinforces segregation patterns.

Richardson then described research documenting this feedback loop (Mehrotra et al., 2021; Sankin et al., 2021). An analysis by Sankin et al. (2021) found that racially segregated neighborhoods, particularly those with the lowest proportion of White residents, had the most crime predictions under a particular place-based predictive policing algorithm. The most-targeted geographic areas had higher proportions of Black and Latino residents than the jurisdiction overall. The algorithm recommended greater police scrutiny of Black and Latino residents than of White residents, and the number of predictions in a geographic area increased as the Black and Latino proportion of the population increased, as well as in areas where low-income households or public housing were concentrated.

Because racial segregation constrains and informs the data used to develop and deploy place-based predictive policing systems, Richardson said that developers of these technologies and public officials driving their adoption must consider the local, social, political, and historical context of segregation to assess whether deployment of these technologies may cause harm. She called for greater monitoring of claims made by commercial vendors of place-based predictive policing systems, and she suggested that the field would benefit from clear procurement guidance for local law enforcement agencies and best practices for deployment, developed in consultation with experts, advocates, community leaders, and practitioners. She noted the potential value of the U.S. Office of Management and Budget’s AI governance and risk management guidance for predictive policing technologies used by federal agencies or funded by federal grants. Richardson argued for federal granting policies requiring publicly available pre- and post-adoption evaluations of predictive policing technologies. Place-based predictive policing technologies will continue to perpetuate societal structural inequities if the root causes of inequities, like segregation, are not fully evaluated and considered in the technology development life cycle and in government technology procurement decisions, concluded Richardson.

PERSPECTIVES FROM COMPUTER SCIENCE

Elissa Redmiles, Georgetown University, brought the computer science perspective into the workshop’s conversation. She identified two key considerations around trust in technology—whether the technology works and whether it is fair. The first challenge in determining whether a predictive policing technology works, Redmiles said, is that these tools often predict an outcome that cannot be validated. Such algorithms aim to predict the risk of committing a crime, but the only outcome data available are arrests, which are incomplete—arrest data could include wrongful arrests and would not include any crimes for which an arrest was not made. The second challenge is that predictive tools are often no more effective than human predictions. Redmiles cited a 2020 study of the limits of human predictions of recidivism, which found that lay people made predictions that were about as accurate as an algorithm (Lin et al., 2020). Finally, said Redmiles, the assumed goal of predictive policing tools is to reduce crime, but there is a difference between accurately predicting future crimes and preventing them. A prediction of crime may lead police to observe more crime and make more arrests, but it may not prevent crimes from happening.

In addition to challenges involved with determining effectiveness, predictive policing tools may also have unintended consequences, said Redmiles, noting that even collection of public data can lead to harm. Redmiles shared the example of an anti-trafficking text messaging system used to find potential trafficking victims. Redmiles and colleagues interviewed the developers and users of this system, as well as individuals contacted through the program, and identified instances in which private information was leaked (Bhalerao et al., 2022). Another example of a harmful consequence resulting from predictive policing, Redmiles noted, was the shooting of Robert McDaniel in Chicago. These systems can be a self-fulfilling prophecy, she said, with the prediction of being involved in a shooting leading to a shooting.

Inputs used in predictive policing algorithms can also lead to data feedback loops, Redmiles explained. In computer science, this is called the problem of performative prediction. To address these issues, Redmiles offered three suggestions: (a) ensure that the outcome used to measure success matches the prediction, (b) justify the choice to predict negative behavior rather than the efficacy of positive interventions, and (c) articulate how the prediction will turn into action and use expert and community feedback to identify potential consequences, self-fulfilling prophecies, and feedback loops. Redmiles noted that while it is important to ensure that quality data are used and that algorithms make accurate predictions, it is equally important to focus on how the prediction will be used and to carefully consider potential negative consequences.
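The runaway character of such feedback loops can be sketched with a deliberately minimal toy model. The two areas, rates, and patrol rule below are invented for illustration and are not drawn from any deployed system:

```python
# Two areas with the SAME true incident rate. Each period, patrols go to
# whichever area has more *recorded* incidents, and only the patrolled
# area generates new records -- a crude stand-in for the performative
# prediction problem: the prediction changes the data that feed it.
TRUE_RATE = 10                   # identical underlying rate in both areas
recorded = {"A": 11, "B": 10}    # one-incident disparity in historical data

for _ in range(5):
    target = max(recorded, key=recorded.get)  # hot-spot allocation rule
    recorded[target] += TRUE_RATE             # only patrolled area is observed

print(recorded)  # → {'A': 61, 'B': 10}: the one-incident gap grew to 51
```

Because area A starts with one more recorded incident, it receives all patrols in every period, and the record confirms the allocation indefinitely, even though the areas are identical by construction.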

In addition to accuracy concerns, the fairness of person-based predictive policing is also at issue, said Redmiles. Even if a tool is generally accurate, it may be more or less accurate as applied to various people or groups. There are multiple ways to measure a tool’s fairness, said Redmiles. For example, one measure might look at whether outcomes are equal between groups, while another looks at whether the incidence of false positives or false negatives is equal between groups. At least 21 computational definitions of fairness exist, said Redmiles, but they cannot all be achieved simultaneously, requiring difficult decisions about which definitions of fairness to prioritize.
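The tension between the two example measures Redmiles describes can be made concrete with toy numbers. The labels, predictions, and groups below are invented, and the variable names are illustrative only:

```python
# y = true outcome, p = tool's prediction, g = group membership (toy data)
y = [1, 0, 0, 0, 1, 1, 0, 0]
p = [1, 1, 0, 0, 1, 0, 1, 0]
g = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(group):
    """Demographic parity compares this rate across groups."""
    preds = [pi for pi, gi in zip(p, g) if gi == group]
    return sum(preds) / len(preds)

def false_positive_rate(group):
    """Error-rate parity compares this (and the false negative rate)."""
    flagged = [pi for yi, pi, gi in zip(y, p, g) if gi == group and yi == 0]
    return sum(flagged) / len(flagged)

# The tool satisfies one fairness definition while violating the other:
print(positive_rate("a"), positive_rate("b"))              # 0.5 0.5
print(false_positive_rate("a"), false_positive_rate("b"))  # 1/3 vs 1/2
```

Here both groups are flagged at the same overall rate, so the equal-outcomes measure is satisfied, yet group "b" suffers false positives more often than group "a", so the error-rate measure is violated: the two definitions cannot both hold on these data.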

Another approach for determining whether something is fair is asking the public. When asked about fairness, people’s perceptions vary based on their personal experiences, ideologies, demographics, and the specific application of the information. People care not only about the fairness of outcomes but also about procedural fairness, she said. For example, is it fair to use certain data to make a particular decision? When considering how to make a system “fair,” Redmiles urged stakeholders to seek input from both experts and the public; to gather the viewpoints of people with diverse perspectives, experiences, and demographics; and to be mindful of “tokenism.”3 Furthermore, aligning computational fairness metrics with public perception may be beneficial, she said.

Closed systems breed mistrust, argued Redmiles. Courts and legislatures have recently recognized the importance of transparency in data and algorithms, particularly those with a direct impact on people’s lives. For example, the Wisconsin Supreme Court requires proprietary risk assessment systems to contain disclaimer warnings. Responsible computer science programs have benchmarks, are publicly accessible, and are audited internally and externally, said Redmiles. Unsolicited, independent audits—such as those by journalists or academics—can only be conducted if systems are accessible and open. Redmiles urged stakeholders procuring or funding predictive policing systems to require external audits, to require ongoing access for unsolicited independent audits, and to publicly report incidents, outcomes, and design decisions. These measures can both lead to more robust and accurate systems and create public trust.

SYSTEMIC PROBLEMS

“Systemic problems call for systemic change,” said Jumana Musa, National Association of Criminal Defense Lawyers (NACDL). Policing aims to solve a problem, she said, and how the problem is defined will determine how the solution is generated. Over the years, society has focused less on factors that improve communities, such as education, housing, social services, and healthcare, said Musa, while increasing investment in policing and prisons and moving away from safety nets. At the same time, evolving iterations of predictive policing technologies have increased law enforcement’s ability to surveil and garner private details from individuals, she said, and uses of predictive policing technologies are “incredibly powerful […] and concerning.” Musa noted that most organizations do not refer to the tools under discussion in the workshop as “predictive policing”; instead they refer to data analytics, intelligence-led policing, or precision policing.

___________________

3 Tokenism refers to the practice of making a superficial effort to include marginalized groups without genuinely valuing their contributions or addressing systemic inequalities.


NACDL’s Task Force on Predictive Policing spent several years researching data-driven policing by talking to academics, technologists, law enforcement officials, community groups, and companies. Musa shared a quote from mathematician Ben Green (2019):

In the hands of police, even algorithms intended for unbiased and non-punitive purposes are likely to be warped or abused. For whatever its underlying capabilities, every technology is shaped by the people and institutions that wield it. Unless cities alter the police’s core functions and values, use by police of even the most fair and accurate algorithms is likely to enhance discriminatory and unjust outcomes.

NACDL sees racism, over-policing, and mass incarceration as structural, systemic problems that cannot be solved with predictive policing tools, said Musa. She argued that such tools divert resources and funds from communities—funds that could be allocated toward social services and community-led public safety initiatives. Predictive policing tools also have the potential to hyper-criminalize individuals, families, and communities of color, and they may create, replicate, and exacerbate “self-perpetuating cycles of bias.” Police are not meant to take on all of society’s ills, and investing in predictive policing tools is often done at the expense of structural solutions that could help communities, Musa said.

However, said Musa, because these technologies continue to be used, NACDL has issued recommendations to mitigate harms associated with their use. The association has called on the Department of Justice to “1) carefully consider whether federal law enforcement agencies should use these technologies at all; 2) condition federal funding on strict and expansive validation and disclosure requirements; and 3) increase requirements on companies providing the technology to open their systems to external validation and review by the criminal legal system” (Musa, 2024).

Next Chapter: 5 Community Responses to Predictive Policing