A renewed wave of debate over predictive policing technologies is sweeping through criminology departments, civil rights organisations, and police agencies, as researchers warn that algorithmic crime forecasting tools are entrenching racial bias and eroding due process protections in ways that traditional policing oversight has struggled to address. Recent investigations into systems used across the United States and Europe have reignited questions about whether data-driven law enforcement can ever be made fair — or whether the entire approach should be abandoned.
The Resurgence of an Old Controversy
Predictive policing — the use of historical crime data, machine-learning models, and geospatial analytics to forecast where crimes are likely to occur or who is likely to commit them — first gained traction in American police departments in the early 2010s. Tools like PredPol (now Geolitica) and Palantir’s Gotham promised to make policing more efficient by directing patrols to “hot spots” before crimes happened. But as a growing body of criminological research has shown, these systems frequently inherit and amplify the biases embedded in the arrest records used to train them.
A widely cited investigation by [The Markup](https://themarkup.org/prediction-bias/2021/12/02/crime-prediction-software-promised-to-be-free-of-biases-new-data-shows-it-perpetuates-them) found that Geolitica’s predictions disproportionately targeted Black and Latino neighbourhoods, even when controlling for reported crime rates. Although the company has since quietly wound down its predictive product, similar tools continue to spread, often rebranded as “risk terrain modelling” or “intelligence-led policing platforms.”
What the Latest Research Reveals
New scholarship is sharpening the critique. Sociologists and criminologists studying algorithmic governance argue that predictive policing tools function less as objective forecasters and more as feedback loops: officers sent to predicted hot spots make more arrests there, which in turn reinforces the algorithm’s belief that those neighbourhoods are criminogenic. The result, critics say, is a self-fulfilling prophecy dressed up in the language of mathematical neutrality.
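How strong can this feedback effect be? A minimal simulation sketch, written for this article rather than drawn from any real system, makes the dynamic concrete. The district names, starting arrest counts, and detection model are all illustrative assumptions, chosen only to show how skewed historical data can lock itself in.

```python
import random

random.seed(42)

# Two hypothetical districts with IDENTICAL underlying crime rates.
TRUE_CRIME_RATE = 0.05   # chance an incident is detected per unit of patrol presence
N_INCIDENTS = 1000       # potential incidents per district per period

# Assumption: District A was historically over-policed, so the training
# data starts with twice as many arrests there.
arrests = {"District A": 120, "District B": 60}

for period in range(10):
    total = sum(arrests.values())
    # "Prediction": allocate patrol presence in proportion to past arrests.
    shares = {district: count / total for district, count in arrests.items()}
    for district, share in shares.items():
        # Arrests track patrol presence, not just underlying crime:
        # more officers on the ground means more of the same crime is seen.
        detected = sum(
            1 for _ in range(N_INCIDENTS)
            if random.random() < TRUE_CRIME_RATE * share
        )
        arrests[district] += detected

print(arrests)  # District A ends with roughly twice District B's arrests
```

Because patrols follow past arrests and arrests follow patrols, the initial two-to-one skew is reproduced in every period even though both districts have identical underlying crime rates: the model keeps “confirming” its own prior.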
Civil liberties groups have stepped up their campaigns in response. The [Electronic Frontier Foundation](https://www.eff.org/issues/predictive-policing) has called for outright bans on predictive policing, arguing that “place-based” and “person-based” systems alike fail basic tests of transparency, accountability, and constitutional protection. Several U.S. cities, including Santa Cruz, Oakland, and New Orleans, have already banned or sharply curtailed the use of these systems.
The European Dimension
The conversation is not confined to the United States. Under the European Union’s newly enacted [AI Act](https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai), predictive policing tools that profile individuals based on personality traits or past behaviour to assess the likelihood they will commit a crime are now classified as “unacceptable risk” and effectively prohibited. The regulation marks one of the first binding legal frameworks anywhere in the world to draw a hard line against certain categories of algorithmic policing.
European criminologists, however, caution that enforcement remains uncertain. National police forces in Germany, the Netherlands, and the United Kingdom have all experimented with predictive systems in recent years, and the boundary between prohibited “person-based” forecasting and permitted “place-based” analytics is often blurry in practice. Researchers at institutions such as the Max Planck Institute for the Study of Crime, Security and Law have urged greater interdisciplinary scrutiny.
Why This Story Matters
Predictive policing sits at the intersection of several urgent debates in contemporary sociology and criminology: the role of big data in governance, the persistence of structural racism in criminal justice, and the question of whether technological “solutions” can address problems that are fundamentally political. As Professor Ruha Benjamin has argued in her work on what she calls “the New Jim Code,” seemingly neutral technologies routinely encode and accelerate older patterns of discrimination.
For working police officers, predictive tools have also reshaped daily practice. Patrol officers report feeling pressured to generate arrests in algorithmically flagged areas, while community trust, already fragile in many neighbourhoods, erodes further when residents learn they live in a “predicted” zone. Sociologists studying police culture warn that this dynamic may be undermining the very legitimacy that effective policing requires.
What to Watch Next
Several developments are likely to shape the next phase of the debate. U.S. federal regulators are weighing whether to issue formal guidance on algorithmic policing under existing civil rights statutes, and class-action lawsuits challenging predictive systems on constitutional grounds are working through the courts. In Europe, the first enforcement test cases under the AI Act will set important precedents. And on the academic side, a new generation of computational sociologists is developing auditing methodologies that may finally allow independent researchers to evaluate proprietary policing algorithms in rigorous ways.
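What might such an audit look like in its simplest form? One common building block is a disparity check: compare how heavily a tool targets each neighbourhood against that neighbourhood’s share of reported crime, roughly the comparison The Markup made at scale. The sketch below uses entirely fabricated counts and hypothetical district names.

```python
# Toy disparity audit: does a tool's targeting of each district track
# reported crime, or does it diverge for some communities?
# All figures are fabricated, for illustration only.
audit_rows = [
    # (district, predictions targeting it, reported crimes)
    ("District A", 410, 180),
    ("District B", 150, 170),
    ("District C", 90, 95),
]

total_predictions = sum(preds for _, preds, _ in audit_rows)
total_crimes = sum(crimes for _, _, crimes in audit_rows)

for district, preds, crimes in audit_rows:
    prediction_share = preds / total_predictions
    crime_share = crimes / total_crimes
    # A ratio above 1.0 means the district is targeted out of proportion
    # to its share of reported crime.
    disparity = prediction_share / crime_share
    print(f"{district}: targeted at {disparity:.2f}x its share of reported crime")
```

Real audits must also grapple with the deeper problem the feedback-loop critique raises: reported crime is itself shaped by where police are sent, so even a clean-looking ratio can rest on distorted data.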
Whether predictive policing can be reformed and regulated, or must be abandoned altogether, remains an open question, one whose answer will shape both the future of criminology as a discipline and the lived experience of policing for millions of people.