Predictive policing in the United States

In the United States, the practice of predictive policing has been implemented by police departments in several states, including California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York, and Illinois. Predictive policing refers to the use of mathematical models, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. Predictive policing methods fall into four general categories: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators' identities, and methods for predicting victims of crime.

In the United States, the technology has been described in the media as a revolutionary innovation capable of "stopping crime before it starts". However, a RAND Corporation report on implementing predictive policing technology describes its role in more modest terms:


 * Predictive policing methods are not a crystal ball: they cannot foretell the future. They can only identify people and locations at increased risk of crime ... the most effective predictive policing approaches are elements of larger proactive strategies that build strong relationships between police departments and their communities to solve crime problems.

In November 2011, TIME Magazine named predictive policing as one of the 50 best inventions of 2011, using the term "pre-emptive policing".

Methodology
Predictive policing uses data on the times, locations, and nature of past crimes to give police strategists insight into where, and at what times, patrols should operate or maintain a presence in order to make the best use of resources and maximize the chance of deterring or preventing future crimes. This type of policing detects signals and patterns in crime reports to anticipate whether crime will spike, when a shooting may occur, where the next car will be broken into, and who the next crime victim will be. Algorithms are built from these factors, drawing on large volumes of analyzable data. Automation speeds up the process, since an algorithm can quickly weigh many variables at once to produce a recommendation. The predictions an algorithm generates are meant to be coupled with a prevention strategy, which typically sends an officer to the predicted time and place of the crime. Proponents argue that automated prediction is more accurate and efficient than relying on officers' instincts alone, because decisions are backed by data. With this information, police can anticipate the concerns of communities, allocate resources wisely to times and places, and prevent victimization.

Police may also use data accumulated on shootings and the sounds of gunfire to identify the locations of shootings. The city of Chicago blends data from population mapping and crime statistics to improve monitoring and identify patterns. PredPol, founded in 2012 by a UCLA professor, is one of the market leaders among predictive policing software companies. Its algorithm is based on the near-repeat model, which holds that once a crime occurs in a specific location, the properties and land surrounding it are at elevated risk of subsequent crime. The algorithm takes into account crime type, crime location, and the date and time of the crime in order to predict future crime occurrences. Another software program used for predictive policing is Operation LASER, deployed in Los Angeles in an attempt to reduce gun violence. LASER was discontinued in 2019 for several reasons, most notably inconsistencies in how individuals were labeled. Other police departments have likewise discontinued such programs, citing the racial biases and ineffective methods associated with them. While the idea behind the predictive policing model is helpful in some ways, it has always had the potential to technologically reiterate social biases, reinforcing pre-existing patterns of inequality.
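
The near-repeat idea described above can be illustrated with a minimal sketch. This is not PredPol's proprietary algorithm; it is a generic risk score in which each past crime raises the risk of nearby locations, with the contribution decaying exponentially in both elapsed time and distance. The decay constants and the `near_repeat_risk` function are illustrative assumptions.

```python
import math

# Illustrative decay constants (assumptions, not PredPol's parameters):
TIME_SCALE = 7.0   # days for a crime's influence to fall by ~63%
DIST_SCALE = 0.5   # km over which influence falls by ~63%

def near_repeat_risk(cell, now, past_crimes):
    """Near-repeat risk score for a location `cell` (x, y in km) at time
    `now` (days). `past_crimes` is a list of (x, y, t) tuples."""
    risk = 0.0
    for (x, y, t) in past_crimes:
        dt = now - t
        if dt < 0:
            continue  # ignore crimes recorded after `now`
        dist = math.hypot(cell[0] - x, cell[1] - y)
        # Each past crime contributes less the older and farther away it is.
        risk += math.exp(-dt / TIME_SCALE) * math.exp(-dist / DIST_SCALE)
    return risk

crimes = [(1.0, 1.0, 0.0), (1.2, 0.9, 2.0), (5.0, 5.0, 1.0)]
near = near_repeat_risk((1.0, 1.0), 3.0, crimes)  # next to a recent cluster
far = near_repeat_risk((9.0, 9.0), 3.0, crimes)   # far from all recorded crime
```

A location adjacent to a recent cluster scores far higher than a distant one, which is the property a near-repeat system uses to rank patrol areas.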

The models used are typically not built on any direct assumptions about the data or what might cause crime. The intent is to remove human judgement, and the opportunity for bias that comes with it, from the equation. However, bias within a model may be unavoidable if the data used to build it is itself biased, since predictive models can only replicate patterns found in existing data. Furthermore, while many models avoid using race, gender, location, or other sensitive and potentially biasing variables, it is extremely difficult to eliminate all proxies for such variables, because they correlate with much of the other data available to law enforcement that the models do use.
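
The proxy problem can be shown with a toy example. The data, variable names, and groupings below are entirely synthetic assumptions: even though the sensitive group label is withheld from the model, a correlated proxy (here, a neighborhood code) lets a simple frequency-based predictor reproduce the same disparity.

```python
# Synthetic records: (neighborhood, group, arrested). The "group" column is
# NOT given to the model; it is shown only to measure the underlying disparity.
records = [
    ("north", "A", 1), ("north", "A", 1), ("north", "A", 1), ("north", "A", 0),
    ("north", "B", 1),
    ("south", "B", 0), ("south", "B", 0), ("south", "B", 1), ("south", "B", 0),
    ("south", "A", 0),
]

def arrest_rate_by(rows, key_index):
    """Arrest rate grouped by one column (0 = neighborhood, 1 = group)."""
    totals, hits = {}, {}
    for row in rows:
        k = row[key_index]
        totals[k] = totals.get(k, 0) + 1
        hits[k] = hits.get(k, 0) + row[2]
    return {k: hits[k] / totals[k] for k in totals}

by_group = arrest_rate_by(records, 1)         # disparity in the raw data
by_neighborhood = arrest_rate_by(records, 0)  # the model only sees this column
```

Because group A lives mostly in "north", a model scoring by neighborhood assigns "north" a much higher risk, effectively recovering the group disparity it was never shown.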

History
Attempts to predict crime within police departments can be traced back to work conducted by the Chicago School of Sociology on parole recidivism in the 1920s. Sociologist Ernest Burgess used that research to craft the actuarial approach, which identifies and weighs factors that correlate with future crime. The approach soon spread to other parts of the justice system, leading to the creation of prediction instruments such as the Rapid Risk Assessment for Sexual Offense Recidivism (RRASOR) and the Violence Risk Appraisal Guide (VRAG).

In 2008, Police Chief William Bratton at the Los Angeles Police Department began working with the acting directors of the Bureau of Justice Assistance and the National Institute of Justice to explore the concept of predictive policing in crime prevention. In 2010, researchers proposed that it was possible to predict certain crimes, much like scientists forecast earthquake aftershocks.

In 2009, the National Institute of Justice held its first predictive policing symposium. At the event, Kristina Rose, its acting director, said that the Shreveport, Los Angeles, D.C. Metropolitan, New York, Chicago, and Boston Police Departments were interested in implementing a predictive policing program. Today, predictive policing programs are used by police departments in several U.S. states, including California, Washington, South Carolina, Arizona, Tennessee, New York, and Illinois.

Beginning in 2012, the New Orleans Police Department (NOPD) engaged in a secretive collaboration with Palantir Technologies in the field of predictive policing. According to James Carville, he was the impetus for the project, and "[n]o one in New Orleans even knows about this".

In 2020, the Fourth Circuit Court of Appeals handed down a decision finding predictive policing to be a law-enforcement tool that amounted to nothing more than reinforcement of a racist status quo. The court also held that granting the government an exigent-circumstances exemption in the case would be a broad rebuke of the landmark Terry v. Ohio decision, which set the standard for lawful stops and searches. Predictive policing, typically applied to so-called "high-crime areas", "relies on biased input to make biased decisions about where police should focus their proactive efforts", and without it police are still able to fight crime adequately in minority communities.

Effectiveness
The effectiveness of predictive policing has been tested through multiple studies with varying findings. In 2015, the New York Times published an article that analyzed predictive policing's effectiveness, citing numerous studies and explaining their results.

A study conducted by the RAND Corporation found no statistical evidence that crime was reduced when predictive policing was implemented. The study notes that prediction is only half of effective practice; carefully executed action based on the predictions is the other half. Both depend heavily on the reliability of the input data: if the data is unreliable, the effectiveness of predictive policing can be disputed.

Another study, conducted by the Los Angeles Police Department in 2010, found the accuracy of predictive methods to be twice that of the department's existing practices. In Santa Cruz, California, the implementation of predictive policing over a six-month period resulted in a 19 percent drop in the number of burglaries. In Kent, 8.5 percent of all street crime occurred in locations predicted by PredPol, beating the 5 percent achieved by police analysts.

A study from the Max Planck Institute for Foreign and International Criminal Law, evaluating a three-year pilot of the Precobs (pre-crime observation system) software, said no definite statements could be made about the efficacy of the software. The pilot project was set to enter a second phase in 2018.

According to the RAND Corporation study, predictive policing can be severely undermined by poor data quality, specifically by data censoring, systematic bias, and low relevance. Data censoring occurs when the data used omits crime in certain areas. Systematic bias can result when data records a certain number of crimes but does not sufficiently report when the crimes took place. Relevance refers to the usefulness of the data that drives predictive policing.
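
The effect of data censoring can be sketched with synthetic numbers (the areas, counts, and reporting rates below are assumptions for illustration): two areas with nearly identical true offense levels look very different in the recorded data when one area is under-reported, so a model ranking areas by recorded counts concentrates attention on the wrong basis.

```python
# True offense counts (unknown to police) and the fraction actually recorded.
actual = {"area_1": 100, "area_2": 95}
reporting_rate = {"area_1": 0.9, "area_2": 0.4}  # area_2 is under-reported

# What the prediction system actually sees:
recorded = {area: round(actual[area] * reporting_rate[area]) for area in actual}

# Ranking by recorded data puts area_1 far ahead even though true crime
# levels are nearly equal, so patrols would be concentrated accordingly.
predicted_hotspot = max(recorded, key=recorded.get)
```

The recorded gap (90 versus 38) reflects the reporting gap, not the 5-offense difference in actual crime, which is the censoring failure mode the RAND study describes.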

Deficiencies of this kind have been documented to cause ineffective and discriminatory policing. One data collection effort reported on the "Disproportionate Risks of Driving While Black", showing that black drivers were significantly more likely to be stopped and searched while driving. Such biases can be fed into the algorithms used for predictive policing and lead to higher levels of racial profiling and disproportionate arrests.

According to the RAND study, the effectiveness of predictive policing depends on input data of high quality and quantity; without sufficient data, it produces inaccurate and counterproductive outcomes. The study also notes that predictive policing is inaccurately referred to as the "end of crime": its effectiveness depends fundamentally on the tangible action taken in response to predictions.

A 2014 report on risk assessment models used to assist in determining conditions of parole found that risk assessments were very effective at reducing rates of recidivism. It argues that banning such models would not solve the problem of racial disparities in the criminal justice system but would merely shift the issue back to biased human decision-making.

A 2013 report on predictive policing found that much simpler models relying on basic crime statistics have often performed comparably to more complex models, without the drawback of being difficult to interpret and evaluate, making them a potentially more reliable and trustworthy alternative.

Independent evaluations of predictive policing experiments in Chicago, Illinois, and Shreveport, Louisiana, found that neither program had a statistically significant impact on crime. The Chicago experiment was, however, found to increase the arrest rate for targeted individuals despite no difference in their likelihood of involvement in crime. The Shreveport experiment reduced law enforcement spending by six to ten percent compared to groups outside the program, and some officers reported improved community relations as a result.

Hot spot policing strategy
A particular method of predictive policing, hot spot policing, has had a positive effect on crime. Evidence from the National Institute of Justice shows that this method has decreased the frequency of multiple offense types, including violent and drug- and alcohol-related offenses. However, without careful execution and sufficient data, the method can perpetuate implicit bias and racial profiling.

Criticisms
Criticisms of predictive policing often focus on ethical concerns: the opacity of complex algorithms, which limits the ability to assess their fairness; the potentially biased data sources used to create the models; and individuals' constitutional rights to due process. Many algorithms used by law enforcement are purchased from private companies that keep the details of their workings hidden as trade secrets, limiting the public's ability to evaluate potential bias in the predictive models. Additionally, some see predicting the locations and individuals associated with crime as fundamentally unconstitutional, arguing that it is contrary to the principle that everyone is presumed innocent until proven guilty.

A coalition of civil rights groups, including the American Civil Liberties Union and the Electronic Frontier Foundation, issued a statement criticizing the tendency of predictive policing to proliferate racial profiling. The ACLU's Ezekiel Edwards argues that such software is more accurate at predicting policing practices than at predicting crimes.

Some recent research is also critical of predictive policing. Kristian Lum and William Isaac examined the consequences of training such systems with biased datasets in "To Predict and Serve?". Saunders, Hunt and Hollywood demonstrate that the statistical significance of the predictions in practice verges on being negligible.

In a comparison of methods of predictive policing and their pitfalls Logan Koepke comes to the conclusion that it is not yet the future of policing but 'just the policing status quo, cast in a new name'.

In testimony to the NYC Automated Decision Systems Task Force, Janai Nelson of the NAACP Legal Defense and Educational Fund urged NYC to ban the use of data derived from discriminatory or biased enforcement policies. She also called for NYC to commit to full transparency about how the NYPD uses automated decision systems and how those systems operate.

According to an article in Significance, 'the algorithms were behaving exactly as expected – they reproduced the patterns in the data used to train them' and that 'even the best machine learning algorithms trained on police data will reproduce the patterns and unknown biases in police data'.

In 2020, following protests against police brutality, a group of mathematicians published a letter in Notices of the American Mathematical Society urging colleagues to stop work on predictive policing. Over 1,500 other mathematicians joined the proposed boycott.

Some applications of predictive policing have targeted minority neighborhoods and lack feedback loops.

Cities throughout the United States are enacting legislation to restrict the use of predictive policing technologies and other "invasive" intelligence-gathering techniques within their jurisdictions.

Following the introduction of predictive policing as a crime-reduction strategy, driven by an algorithm from the software PredPol, the city of Santa Cruz, California, experienced a decline in burglaries of almost 20 percent over the program's first six months. Despite this, in late June 2020, in the aftermath of the murder of George Floyd in Minneapolis, Minnesota, and amid growing calls for police accountability, the Santa Cruz City Council voted in favor of a complete ban on the use of predictive policing technology.

Accompanying the ban on predictive policing was a similar prohibition of facial recognition technology. Facial recognition technology has been criticized for its reduced accuracy on darker skin tones, which can contribute to cases of mistaken identity and, potentially, wrongful convictions.

In 2019, Michael Oliver of Detroit, Michigan, was wrongfully accused of larceny when the DataWorks Plus software registered his face as a "match" to the suspect in a video taken by the victim of the alleged crime. Oliver spent months in court arguing his innocence; once the judge supervising the case viewed the video footage, it was clear that Oliver was not the perpetrator. In fact, the two men did not resemble each other at all, except that both are African-American, a group for which facial recognition technology is more likely to make an identification error.

With regards to predictive policing technology, the mayor of Santa Cruz, Justin Cummings, is quoted as saying, "this is something that targets people who are like me," referencing the patterns of racial bias and discrimination that predictive policing can continue rather than stop.

For example, as Dorothy Roberts explains in her academic journal article "Digitizing the Carceral State", the data entered into predictive policing algorithms to predict where crimes will occur, or who is likely to commit criminal activity, tends to contain information that has been affected by racism. The inclusion of arrest or incarceration history, neighborhood of residence, level of education, membership in gangs or organized crime groups, and 911 call records, among other features, can produce algorithms that suggest the over-policing of minority or low-income communities.

A 2014 report argues that the principle of using past behavior to assess future risk is itself fair, but that the existing records used are not representative of actual past behavior. For example, historical rates of marijuana use are generally consistent across racial lines, yet there are significant disparities in arrest rates for marijuana possession offenses, indicative of unequal enforcement by police, which has led to minority groups accumulating significantly more criminal records. With one group overrepresented in historical arrest data, any model trained on that data will be biased toward considering members of that group at higher risk of committing crimes in the future.
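
The self-reinforcing dynamic critics describe can be sketched with a toy simulation (all numbers and the winner-take-all patrol rule are illustrative assumptions, not a model of any real department): two areas have identical true offense rates, but one starts with more recorded crime; if proactive patrols follow the data and only patrolled offenses get recorded, the initial gap persists and widens.

```python
# Identical true offending in both areas; the only difference is the
# historical record, which starts out biased toward area_1.
offense_rate = {"area_1": 0.10, "area_2": 0.10}
recorded = {"area_1": 60.0, "area_2": 40.0}  # biased starting data

for _ in range(10):
    # All proactive patrols go to the predicted hotspot each round...
    hotspot = max(recorded, key=recorded.get)
    # ...and only offenses in the patrolled area get observed and recorded,
    # so only the hotspot's count grows (100 patrol-encounters per round).
    recorded[hotspot] += 100 * offense_rate[hotspot]
```

After ten rounds the recorded gap has quadrupled even though true offending never differed, which is the pattern summarized in the Significance quotation below: the algorithm faithfully reproduces, and then amplifies, the bias in its training data.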

Faced with citizens' privacy concerns over governmental monitoring and automated surveillance by law enforcement, Maine passed a law in 2021 prohibiting government use of facial recognition in most cases, with exceptions only for a limited set of serious situations such as identifying missing persons.

NYPD's Patternizr model was created to streamline the work of crime analysts and investigators in identifying strings of crimes that are related to one another and potentially committed by a single perpetrator. The NYPD argues that the model's ability to rapidly detect patterns in crime has led to the quick and correct identification of several serial offenders. Critics argue that the program is unfair to citizens, based on unproven social science, and could lead to false confessions and the imprisonment of innocent individuals who, once flagged, feel they have no choice but to accept a plea deal.