Sarah Brayne, University of Texas at Austin1 and a planning committee member, began this session with an overview of person-based predictive policing approaches. Person-based predictive policing, she explained, involves using data to predict who is more likely to be involved in future criminal activity, either as a perpetrator or a victim of crime. Predictions are then used to inform police activity, such as where to send patrol officers. This approach, said Brayne, is based on the premise that a small percentage of people are disproportionately responsible for most violent crime and that police could reduce violent crime by focusing their resources on the highest-risk individuals. Allocating law enforcement resources toward people and places that may be at higher risk for criminal activity is not new—police have been doing this for hundreds of years, Brayne stated. However, she said, the novelty of predictive policing tools lies in the codification and quantification of these predictions.
___________________
1 Following the workshop, Brayne transitioned to a new role at Stanford University.
To provide an overview of person-based predictive policing approaches, Brayne described three example applications of person-based predictive policing. First, she outlined Operation Los Angeles Strategic Extraction and Restoration (LASER), conducted by the Los Angeles Police Department (LAPD), where she previously engaged in extensive on-the-ground fieldwork (Brayne, 2021). Operation LASER’s focus on person-based predictive policing had five goals: (a) extract individuals likely to commit crimes from certain areas, (b) restore peace to communities, (c) remove the anonymity of individuals convicted of gun crimes, (d) remove the anonymity of individuals engaged in gang activity, and (e) reduce gun- and gang-related crime. Operation LASER received federal funding from the Smart Policing Initiative at the Bureau of Justice Assistance (BJA); this initiative supported law enforcement agencies in building “evidence-based, data-driven law enforcement tactics and strategies that are effective, efficient, and economical” (BJA, 2010).
LAPD used a program to aggregate data from disparate sources—including field interview cards, arrest reports, and traffic citations—and generated a list of “chronic offenders.” Each person on the list was assigned a point value according to a formula: five points for a violent crime on their record, five points for known and documented gang affiliation, five points for a prior arrest with a handgun, and five points for being on parole or probation. Brayne pointed out that this simple analytical approach is still considered data-driven predictive policing. Based on the original formula, LAPD found that the list contained too many individuals to be useful, so they added one point for each police contact. The chronic offender list was rank ordered, and detailed bulletins were created for individuals with the highest point values. Bulletins were distributed to officers, posted at the station, and uploaded to officers’ laptops. Officers were instructed to seek out individuals on the bulletins, stop them, and do field interviews to gather further intelligence, although officers had broad discretion in terms of their specific interactions with the individuals. Brayne noted that adding a point to the risk score each time an individual on the list was stopped by police both raised the score and increased the chance that the individual would be stopped again.
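The point formula Brayne described is simple enough to sketch in a few lines of code. The following is an illustrative reconstruction only; the field names and data layout are assumptions for the sketch, not LAPD's actual implementation, which is not public:

```python
# Illustrative reconstruction of the Operation LASER point formula described
# above. Field names and the dict-based layout are assumptions for this
# sketch; LAPD's actual implementation is not public.

def chronic_offender_score(person: dict) -> int:
    """Five points each for a violent crime record, documented gang
    affiliation, a prior handgun arrest, and parole/probation status,
    plus one point per recorded police contact."""
    score = 0
    if person.get("violent_crime_record"):
        score += 5
    if person.get("documented_gang_affiliation"):
        score += 5
    if person.get("prior_handgun_arrest"):
        score += 5
    if person.get("on_parole_or_probation"):
        score += 5
    # Each police contact adds a point, so a stop raises the score and
    # makes future stops more likely: the feedback loop Brayne described.
    score += person.get("police_contacts", 0)
    return score

example = {
    "violent_crime_record": True,
    "documented_gang_affiliation": True,
    "prior_handgun_arrest": False,
    "on_parole_or_probation": True,
    "police_contacts": 4,
}
print(chronic_offender_score(example))  # 5 + 5 + 5 + 4 = 19
```

Because every stop feeds back into the score, two people with identical records can end up with very different scores depending only on how often officers have already stopped them.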
Brayne’s second example was the Strategic Subject List (SSL) in Chicago, also known as the “Heat List.” Based on research showing that gun violence was contagious through social networks, researchers created an algorithm inspired by an epidemiological model of contagion. Data on documented affiliation, criminal records, and police contacts were used as inputs, and the program created a list of individuals considered most likely to commit or be a victim of gun violence. Individuals highest on the list were visited by a police officer, sometimes in conjunction with social service workers. However, similar to Operation LASER, there was notable variation in implementation, a lack of consistent directives, and a great deal of discretion in terms of officer response to predictions.
The third example was an initiative in Pasco County, Florida. As part of a larger initiative, the sheriff’s office used data from arrest histories and other intelligence to generate a list of about 1,000 individuals. These individuals were dubbed “prolific offenders,” and they had, according to the county’s prolific offenders manual, “taken to a career of crime and were not likely to reform.” Deputies were sent to find and interrogate people on the list. This program, said Brayne, was designed not just to reduce crime but also to “reduce bias in policing by using objective data.”
The effectiveness of these programs is largely unknown due to both non-systematic implementation and lack of data, said Brayne. One of the most detailed reviews of a person-based predictive policing program was the Office of Inspector General’s audit of Operation LASER; the audit was conducted in response to concerns voiced by local community activists (Brayne, 2021). The resulting report noted a lack of strong evidence that Operation LASER reduced crime rates and further noted civil rights concerns based on inconsistent enforcement, opacity, and lack of accountability. Operation LASER ended after that audit.
Scott Mourtgos, Salt Lake City Police Department and University of South Carolina, discussed current law enforcement use of person-based predictive policing. He began by noting that the most well-known examples of person-based predictive policing may not be representative. Prominent examples tend to be from large, well-resourced police departments in major
cities (e.g., Los Angeles [LA], Chicago), but the vast majority of police departments are relatively small, with 40% employing fewer than 10 officers and nearly 70% employing fewer than 25 officers (Bureau of Justice Statistics, 2022). Thus, Mourtgos argued that the use of person-based predictive policing is relatively rare compared to place-based predictive policing. In his experience, said Mourtgos, police view person-based predictive policing as a way of getting information about community members who are more at risk of committing crimes. Within this broad scope, police use these analyses in four primary ways.
First, when law enforcement uses punitive attention in implementing person-based policing approaches, the intention is to deter or incapacitate individuals likely to re-offend so they do not commit further crimes. In doing so, law enforcement may engage identified “prolific offenders” through home visits, surveillance, or increased arrests, and work with prosecutors to enhance penalties. Mourtgos noted that this approach is controversial: Some argue that it punishes people for things they have not done yet, while others say that it is a more strategic and focused use of police resources.
A second approach is geared toward making longer-term predictions, said Mourtgos. For example, one police agency used data on criminal activity, social networks, education, and other information to predict whether youths were at risk of becoming prolific offenders as they continued into adulthood. Mourtgos said that this use of person-based predictive policing is rare.
The third way police departments use person-based predictive policing is by generating threat scores from data on past incidents and other information. If an officer is preparing to engage with an individual who has a high risk score, the officer might take extra safety precautions. However, these extra precautions could potentially lead to a negative outcome, Mourtgos said. For example, an officer may be more likely to use force when interacting with an individual with a higher risk score, whereas the officer may not have done so in the absence of this knowledge. Furthermore, legal implications could result if an officer relies on a risk score as part of the reasonable suspicion analysis. Many open questions remain, said Mourtgos, with very little guidance for police decision makers trying to navigate this field.
Finally, predictions made by algorithms associated with person-based predictive policing can also be used for a largely non-law-enforcement intervention known as focused deterrence (see Chapter 4 for more on focused deterrence, which is generally not considered a type of predictive policing). Mourtgos explained that focused deterrence has generally received greater support than person-based predictive policing approaches because it is not an enforcement-only intervention. He also noted that focused deterrence programs require support from and coordination with multiple agencies and partners.
Mourtgos asked rhetorically how informal predictive policing—in which officers make informal predictions based on prior experiences with individuals—differs from more formal versions (e.g., LASER, SSL) and which type might be preferable. Do we want a system in which predictions are made “in people’s heads,” he asked, or a transparent system in which the algorithm and input data can be examined?
In closing, Mourtgos shared findings from a large, representative survey of members of the public who were asked about algorithmic policing (Schiff et al., 2023). In short, he said, people support algorithmic policing and crime prediction. Support is much higher for local agencies engaging in this type of policing than for the federal government. Researchers originally hypothesized that people would be more supportive of agencies using predictive tools internally (e.g., to identify officer misconduct) than externally, for predicting crime. However, the survey found no difference; the public supported both. In policy conversations about person-based predictive policing, public opinion is a critical but poorly understood component, said Mourtgos. As technology and artificial intelligence (AI) rapidly expand, understanding public opinion regarding whether and how to use these tools will become increasingly important.
Person-based predictive policing has generated considerable public and policy discourse but has limited operational presence in the real world, said John Hollywood of RAND. While some applications of person-based predictive policing exist, place-based predictive policing is far more common, he noted. As Brayne described, person-based predictive policing consists of using an algorithm on a wide variety of data to make statistical predictions about the risk of an individual committing a crime or becoming a victim of a crime. This process is often a “black box” model, in which it is difficult to examine how the program calculated the outputs, he said. He added that the European Union has banned person-based predictive policing. The three programs that Brayne discussed—Operation LASER, Chicago’s SSL, and Pasco County’s prolific offender list—are the most high-profile examples of person-based predictive policing. However, Hollywood noted that Operation LASER and Pasco County used point scoring combined with subjective assessment; it is debatable whether these are “true” prediction technologies.
Hollywood shared additional information about the Chicago Police Department’s (CPD’s) experience with the SSL and the results of an evaluation of that program. CPD’s experiment with predictive policing began with efforts to develop hot-spot maps that were more meaningful than traditional spatial density methods, said Hollywood. As part of this effort,
Chicago police became aware of research showing that if an individual was part of certain social networks—particularly if that individual was arrested along with an eventual homicide victim—the individual was more likely to become a homicide victim. A regression model was used to predict individuals’ risk based on their history of co-arrests, said Hollywood. Individuals with top-ranking scores were put on a spreadsheet; these 426 individuals were approximately 500 times more likely to become homicide victims than the average Chicago resident. Hollywood noted that the list was generated once and never updated during the program. Over the next year, there were 405 homicide victims in Chicago. One in five victims had one or more co-arrest links with another homicide victim, but only three victims were on the SSL. Learning from this experience, CPD attempted a second version of person-based predictive policing called the Crime and Victimization Risk Model. This program used a two-stage analysis, said Hollywood. In the first stage, linear regression was used to assign a point value to individuals based on data including age at latest arrest, incidents as victim of violence, arrests for violence or weapons use, narcotics arrests, and gang affiliation. In the second stage, this value was adjusted based on how many co-arrest links the individual had. Individuals were placed into five risk tiers; individuals in the highest-risk category were predicted to have a 27–35% chance of being either a shooting victim or a shooter over the next 18 months. Hollywood noted that because of the low clearance rate for violent crime in Chicago, the SSL was more a list of potential victims than potential shooters.
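The two-stage structure Hollywood described can be sketched as follows. All weights, the link adjustment, and the tier cutoffs below are placeholder assumptions chosen for illustration; the actual model's parameters were not presented:

```python
# Sketch of the two-stage structure Hollywood described for Chicago's Crime
# and Victimization Risk Model. All weights, the link adjustment, and the
# tier cutoffs are placeholder assumptions, not the model's actual
# (unpublished) parameters.

def stage1_score(age_at_latest_arrest: float, victim_incidents: int,
                 violence_weapon_arrests: int, narcotics_arrests: int,
                 gang_affiliated: bool, w: dict) -> float:
    """Stage 1: a linear combination of the reported input variables."""
    return (w["age"] * age_at_latest_arrest
            + w["victim"] * victim_incidents
            + w["violence"] * violence_weapon_arrests
            + w["narcotics"] * narcotics_arrests
            + w["gang"] * int(gang_affiliated))

def stage2_adjust(base_score: float, co_arrest_links: int,
                  link_weight: float) -> float:
    """Stage 2: adjust the linear score by the number of co-arrest links."""
    return base_score + link_weight * co_arrest_links

def risk_tier(score: float, cutoffs: list) -> int:
    """Map an adjusted score onto five tiers (1 = lowest, 5 = highest)."""
    for tier, cutoff in enumerate(cutoffs, start=1):
        if score < cutoff:
            return tier
    return 5

# Placeholder parameters purely for illustration.
weights = {"age": -0.5, "victim": 2.0, "violence": 3.0,
           "narcotics": 1.0, "gang": 4.0}
base = stage1_score(19, 2, 1, 0, True, weights)   # -9.5 + 4 + 3 + 0 + 4
adjusted = stage2_adjust(base, co_arrest_links=3, link_weight=1.5)
print(risk_tier(adjusted, cutoffs=[0, 2, 4, 6]))
```

The key design feature is the separation of stages: the linear score captures an individual's own history, while the second stage layers in network information from co-arrests.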
Experiences with these programs, said Hollywood, led to two major findings. First, the predictive models were operationally unsuitable for several reasons. Risk level scores did not provide sufficient information to guide interventions, mainly because the reasons a person is at risk for being shot vary widely. Although violence often accompanies a time-critical situation (e.g., an escalating personal dispute), the models were not based on real-time information and the analysis was not updated. Finally, said Hollywood, CPD never validated the models or integrated the predictions into their operations. Minimal guidance was provided on specific interventions, and officers tended to take two different approaches: Some attempted to contact everyone on the SSL to inform them that they were on the list, while others stopped individuals on the SSL when they saw them on the street.
The second major finding was that these programs were misunderstood and contributed to unnecessary public fear. Hollywood noted widespread concerns about computer-assisted discrimination and about the programs becoming a “Minority Report”-type system. In addition, the SSL was widely perceived as a “bad guy list,” despite being primarily a victim-prediction list. Furthermore, predictions were weak—even an individual in the highest risk category was statistically unlikely to become a shooter or victim. Poor
communication about the program and lack of clear guidance for police interventions led to confusion about the use of the list, said Hollywood.
In an earlier presentation, Jerry Ratcliffe briefly described his previous work in Philadelphia, which addressed whether a person-based predictive policing algorithm could outperform individual law enforcement officials’ efforts to predict individuals at high risk of committing violent crimes (Ratcliffe, 2019). This work was completed in collaboration with George Kikuchi of the Philadelphia Police Department. Ratcliffe and Kikuchi used a harm-focused algorithm to identify individuals at high risk for committing harm in high-crime police districts in Philadelphia, Ratcliffe explained (Ratcliffe & Kikuchi, 2019). They then asked law enforcement officers on the gun violence task force and a group of police analysts to identify the most high-risk individuals in their districts. Ratcliffe and Kikuchi found that the harm-focused algorithm significantly outperformed the best estimates of both law enforcement officials and analysts.
In the question-and-answer session, moderator Andrew Ferguson asked Hollywood to comment on whether evidence indicates that person-based predictive policing is effective at scale. Hollywood replied that a major problem with the Chicago and LA programs was an absence of operational guidance. Officers had wide discretion regarding how to use information generated by person-based predictive policing technologies, and some chose to use the intelligence in ways that were ineffective or that actively undermined community trust, including harassing community members. There are examples of how programs have gone wrong, he said, but little to no evidence about programs that have gone well. Brayne added that measuring the effectiveness of these programs is dependent on the definition of success. When considering guidelines or frameworks for evaluating these tools, it is important to carefully consider how success will be defined and measured, said Brayne.
Hollywood offered ideas to explain the rarity of person-based predictive policing. First, punishing people for something they have not done incurs ethical, legal, reputational, and institutional risks. Even if punishment is not the intent, this tends to be the “gut reaction” of the public, so person-based predictive policing programs engender considerable fear. He noted that courts use risk assessment tools for pre-trial, probation, and parole, and these do not tend to be as controversial as person-based predictive policing. He proposed two possible reasons for this difference. First, the court’s tools can be framed as providing opportunities for reduced supervision or punishment rather than an increase in supervision or punishment. Second, the job of the criminal legal system involves making decisions
about issues like bail and sentencing. Risk assessment tools offer a way to make these decisions more objectively. Another reason that Hollywood posited for the rarity of person-based predictive policing was that, even if models are statistically valid, predictions are frequently too weak and too general to inform actions.
If person-based predictive policing is problematic and rare, asked Hollywood, what could be done instead? A significant body of research describes effective policing practices, he said, and data-driven approaches could be used to grow this evidence base and improve such practices. Instead of analyzing person-specific data, Hollywood encouraged decision makers to collect and analyze data on how certain actions by the police and other key actors impact community safety, to help develop best practices and scale up promising interventions.
Ferguson shared information about the operation of the Chicago and LA person-based predictive policing programs and why they ultimately ended. He was explicit in his evaluation of the history of person-based predictive policing: “It failed in design, it failed in practice, it weakened police departments, it harmed communities, it wasted taxpayer money, [and] it impacted real people.” Ferguson also emphasized that community concerns and community organizing played essential roles in the cessation of person-based predictive policing programs in Chicago and LA.
Ferguson focused on three main aspects of person-based predictive policing programs: inputs, interventions, and impacts. Inputs for the Chicago SSL comprised a variety of data on prior arrests, victimization, and affiliations. Counting arrests and victimization to predict risk is problematic, said Ferguson, because an arrest reflects police suspicion that has not been adjudicated through the criminal legal system, and victimization does not focus on perpetration. As Hollywood noted, the SSL was not updated at any point during the program’s operation. This means that SSL scores and their underlying data were static for almost two and a half years, Ferguson explained. Choosing appropriate inputs for a system is critically important, said Ferguson, because inputs can reify structural inequities, racial bias, and police power imbalances. Using arrests and other encounters with the police as inputs compounds the power of the police, he noted, and these types of inputs may be irrelevant when used to predict the likelihood of becoming a victim of violence. In addition, the algorithm used to generate the SSL was never proven accurate or effective, said Ferguson, and a report by the Office of Inspector General in Chicago found that risk scores and tiers were unreliable, both because they were not updated regularly and because underlying data were of poor quality (Office of Inspector General, 2020).
In addition to issues with the inputs, interventions in the Chicago program were also problematic, Ferguson suggested. Intended interventions were never explained to officers; the program made officers suspicious of individuals but did not give them clear guidance on how a predicted crime could be prevented. A small number of official interventions were used by CPD, including the Custom Notification Program, designed to identify individuals and connect them to services, and the Targeted Repeat-Offender Apprehension and Prosecution (TRAP) program, designed to enhance prosecution and lengthen incarceration periods.
Ferguson emphasized that placement on the SSL had real-life impacts on individuals and communities. Under the TRAP program, an individual might spend more time in jail because of their presence on the list. Ferguson expressed concern that officers might have treated people on the list differently than they would have otherwise, potentially resulting in unnecessary use of force or an unconstitutional search. Use of the TRAP program ended in 2019, Ferguson noted.
Shifting to LA, Ferguson argued that the chronic offenders list developed by Operation LASER was based on inputs that had little grounding in social science. The design of the inputs meant that each “quality police contact” with an individual added another point to that individual’s risk score. This created a feedback loop in which an officer sought to contact an individual because they were high risk, and this contact increased the individual’s risk score and increased their likelihood of being contacted again. Not only was the original list design problematic, said Ferguson, but LAPD did not always follow the design as planned. An Office of Inspector General report found that some individuals on the list were placed there by informal referral and had no points at all, that points were inconsistently assigned, and that a sizable number of “chronic offenders” had no arrests for violent or gun-related crimes (Office of Inspector General, 2019).
As with the TRAP program, once the Operation LASER list was created, officers were not given clear instructions regarding which interventions to undertake. A list of recommended interventions was provided, which included sending a letter to the individual; conducting warrant, parole, and probation checks; and knocking on doors to advise the individual about available programs and services. In practice, the main “intervention” was increased surveillance. Additionally, officers were told that they could stop and interview individuals solely because of their presence on the list, but Ferguson noted that this type of stop is unconstitutional unless the officer has reasonable suspicion of a crime. In addition to directly impacting the individuals on these lists, Ferguson argued that the LASER program negatively impacted public trust.
Moving forward, Ferguson argued, before person-based predictive policing is adopted, it is important to assess inputs, interventions, and impacts for
their effectiveness in reducing criminal activity. Ferguson offered suggestions for assessing the inputs, interventions, and impacts of future programs.
Using data and algorithms to investigate risk is not inherently problematic, Ferguson said, noting that attempts to determine which structural inequities and other factors lead to some individuals having a higher risk of involvement in shootings is a worthwhile cause. However, police involvement may be counterproductive, particularly in communities that historically mistrust the police. Ferguson emphasized that the risk identification element of predictive policing tools could be used by other organizations to provide resources and address underlying risk factors. There is an opportunity cost, he said, of funding police to do this work rather than diverting funds to trusted community members who are more likely to have the appropriate training, knowledge, and community influence.
Thomas Abt reviewed the state of the science on focused deterrence, which he described as “one of the most impactful anti-violence strategies in the country today.” Abt and Mourtgos noted that focused deterrence is related to but not generally understood to be a form of predictive policing, and it is typically not subject to the same critiques. Focused deterrence requires partnerships involving law enforcement, service providers, and impacted community residents who have come together to stop violence in the community. The partnership works to identify both individuals and groups at high risk of victimization and/or offending. Partners communicate directly with high-risk individuals and convey the commitment of the police and the community to stop violence. If the high-risk individual is willing, the partnership helps deliver specialized support and services. If the individual persists in behavior that threatens the community, Abt said, targeted law enforcement sanctions are deployed as a last resort.
Abt compared focused deterrence to some of the more complex predictive policing programs. Focused deterrence uses limited data sets based on actual criminal conduct rather than varied data sets with both police-generated information and non-police data sources. Abt noted that while some police-generated data can be biased or inaccurate, data on the most serious offenses (e.g., homicide) tend to be fairly accurate. Another difference, said Abt, is that focused deterrence utilizes simple analytic techniques rather than complicated algorithms. Unlike predictive policing, focused deterrence involves partnerships among multiple community stakeholders. Commercial vendors are rare in the focused deterrence space, said Abt, and the work is generally conducted by non-profit agencies or academics. Finally, said Abt, predictive policing has a mixed record of success, while focused deterrence has strong evidence of effectiveness. A systematic review of focused deterrence examined 24 quasi-experimental tests of the approach (Braga et al., 2018). Nineteen found “noteworthy” reductions in crime, and the overall effect size (Cohen’s d = .657) indicated a large, statistically significant impact. Numerous examples of focused deterrence programs exist, including Boston Ceasefire,2 Oakland Ceasefire,3 and Operation Longevity,4 noted Abt.
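For reference, Cohen's d, the effect-size statistic cited for the Braga et al. review, standardizes the difference between two group means by their pooled standard deviation. A minimal implementation of the standard textbook formula (not specific to that review) is:

```python
# Cohen's d: the standardized mean difference. Shown here only to clarify
# what an effect size of .657 measures; this is the standard textbook
# formula, not code from the Braga et al. (2018) review.
import statistics

def cohens_d(group_a: list, group_b: list) -> float:
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # Pooled standard deviation across both groups.
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Two small illustrative samples whose means differ by one pooled SD:
print(round(cohens_d([2.0, 3.0, 4.0], [1.0, 2.0, 3.0]), 3))  # 1.0
```

In this metric, a d of .657 means the treated and comparison groups differed by about two-thirds of a pooled standard deviation, conventionally read as a medium-to-large effect.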
Based on lessons learned from focused deterrence programs, Abt offered several considerations for the potential use of person-based predictive policing. First, he said, greater conceptual and definitional clarity could help prevent policymaker, practitioner, and public confusion. The term “predictive policing” is outdated and not well regarded, he explained. Referring to all data-driven policing as “predictive policing” could undermine evidence-based programs (e.g., hot-spot policing). Such “broad-brush labels,” Abt said, may contribute to avoidance of effective programs like focused deterrence or hot-spot policing.
Abt cautioned against data-driven approaches that use large, potentially unreliable data sets without strong justification and controls. Furthermore, transparency is essential when advanced analytic techniques are used, and it is critical to rigorously evaluate algorithms or techniques before expanding their use. It is also important for police departments to partner with communities, service providers, civil rights groups, and other decision makers and impacted groups when using advanced data-driven technologies. Abt encouraged police decision makers to look to the science, as substantial evidence illustrates best and worst practices within policing.
___________________
2 https://police.boston.gov/operation-ceasefire/
3 https://www.oaklandca.gov/topics/oaklands-ceasefire-strategy
Technology is continually advancing, both within and outside policing, Abt emphasized. There is no choice but to continually modernize and adopt new technologies, he said, but care must be taken. Abt said that, in general, a technology that narrows the scope of governmental intrusion should be welcomed, but a technology that expands that scope warrants more scrutiny. It can be argued, he said, that predictive approaches such as hot-spot policing and focused deterrence narrow the scope of governmental intrusion. As technology and other innovations advance, a framework with general principles could help guide decision making within the criminal legal system. Instead of dealing with novel issues one at a time, a cumulative knowledge-development process and general “rules of the road” could allow users to navigate emerging ethical and legal challenges based on consistent principles. Abt stressed the urgent need for such a general framework, given the rapid expansion of AI and its potential uses in law enforcement.