

Predictive policing is not a neutral tool. It is a technological expression of old power structures, reframed in the language of progress.

By Matthew A. McIntosh
Public Historian
Brewminate
Introduction: A New Frontier in Surveillance
When the promise of technology meets the weight of history, it rarely lands evenly. Predictive policing, a data-driven approach that purports to prevent crime before it occurs, emerged from the language of efficiency and innovation. Proponents frame it as a solution to urban violence, budgetary strain, and officer safety. But beneath its techno-rational surface, predictive policing often replicates and deepens the very inequalities it claims to address.
The practice is built on algorithms fed by past policing data. It sounds neutral. Yet the data itself is not. It reflects generations of over-policing in Black and Brown neighborhoods, broken windows policing, and a long legacy of surveillance rooted in race, class, and geography. Feeding biased data into an algorithm does not cleanse it. It codifies it. And once an algorithm makes a prediction, it tends to become a self-fulfilling prophecy, deploying more officers to the same neighborhoods, gathering more data from the same people, and feeding that data back into the system.
This is not the future of policing. It is the past, refactored.
The Origin Story of Predictive Policing
The idea of preemptive crime prevention is older than the software that now powers it. In the 1990s, “CompStat” brought data analytics into the precincts of New York, offering commanders a way to track and respond to crime trends in near real time. The model was hailed as a success and quickly replicated across major cities. But even then, critics noted that CompStat’s value was contingent on the quality and context of the data being collected—and on the discretion of officers interpreting it.
The move from CompStat to predictive policing came with the rise of big data and machine learning in the early 2000s. Companies like PredPol, now Geolitica, began offering software that could predict where crimes were likely to occur based on historical data. The model, often compared to weather forecasting, would generate “hotspots” for patrols. Officers could then be dispatched to those locations with the intent of deterring crime before it happened.
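The mechanics can be made concrete with a minimal sketch. The Python below scores grid cells by their count of past reported incidents, down-weighting older reports, and ranks the top cells as patrol targets; the grid cells, decay rate, and data layout are illustrative assumptions, not a reconstruction of PredPol's proprietary model.

```python
# Illustrative sketch only: a grid-based "hotspot" score built from historical
# incident reports, with newer reports weighted more heavily. The grid cells,
# decay rate, and data layout are assumptions, not the proprietary model.
from collections import defaultdict
from datetime import date

DECAY = 0.9   # assumed per-day decay: recent reports count more than old ones

def hotspot_scores(reports, today):
    """reports: iterable of (grid_cell, report_date) pairs of past *reported* crime."""
    scores = defaultdict(float)
    for cell, when in reports:
        scores[cell] += DECAY ** (today - when).days   # exponentially down-weight old reports
    return scores

def top_hotspots(reports, today, k=3):
    """Rank grid cells and return the k highest-scoring ones as patrol targets."""
    scores = hotspot_scores(reports, today)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Cells with more *reported* incidents dominate the patrol list, regardless of
# how much crime goes unreported elsewhere.
history = [("cell_12", date(2024, 3, 30)), ("cell_12", date(2024, 4, 2)),
           ("cell_07", date(2024, 4, 1)), ("cell_12", date(2024, 4, 5))]
print(top_hotspots(history, today=date(2024, 4, 7)))   # ['cell_12', 'cell_07']
```

Nothing in that score measures crime itself; it measures the paper trail that past enforcement left behind.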
The promise was alluring. It suggested a more surgical, less biased form of policing. But that assumption rested on the idea that data is objective. It is not.
Data Without Context: Garbage In, Bias Out

Predictive policing models do not predict crime. They predict reported crime. And reported crime is deeply entangled with police presence, community relationships, and the historical patterns of enforcement.
In neighborhoods that have long been over-policed, more arrests mean more data points. More data points mean more hotspots. More hotspots mean more patrols. And more patrols mean more arrests. This cycle entrenches systemic bias under the banner of statistical rigor.
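A toy simulation, built entirely on assumed numbers, shows how little it takes for that cycle to lock in. Two districts are given identical actual offense rates, but one starts with more recorded incidents; each week the district with the larger record is flagged as the hotspot, receives the extra patrol, and is the only place where new offenses get recorded.

```python
# Toy simulation of the feedback loop described above. Every rate, count, and
# allocation rule here is an assumption chosen for illustration, not any
# vendor's deployed model.
import random

random.seed(0)
TRUE_WEEKLY_OFFENSES = {"A": 10, "B": 10}   # identical actual offending (assumed)
recorded = {"A": 30, "B": 20}               # historical records already skew toward A

for week in range(52):
    hotspot = max(recorded, key=recorded.get)   # the model flags the "riskier" district
    # The extra patrol observes roughly half of that district's offenses;
    # offenses in the unpatrolled district generate no new records at all.
    seen = sum(random.random() < 0.5 for _ in range(TRUE_WEEKLY_OFFENSES[hotspot]))
    recorded[hotspot] += seen

print(recorded)   # roughly {'A': 290, 'B': 20}: the gap widens despite equal offending
```

Even in this stripped-down version the gap never closes, because the district that goes unpatrolled also goes unmeasured.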
Consider Los Angeles. In 2016, the LAPD partnered with Palantir to identify “chronic offenders” using a combination of predictive analytics and network mapping. The program targeted individuals who had not yet committed a crime but were statistically likely to do so based on past associations or locations. Many of those flagged were young Black and Latino men. Some were placed under constant surveillance. Others faced repeated stops without cause.
In Chicago, the now-defunct Strategic Subject List identified hundreds of individuals as potential shooters or victims based on factors such as social networks, arrests, and gang affiliation. Community members often did not even know they were on the list. There was no way to appeal or correct the data. And the vast majority of those listed, over 90 percent, were Black or Latino.
The Feedback Loop of Inequality
Predictive policing does not exist in a vacuum. It interacts with housing policy, school closures, economic disinvestment, and healthcare access. When an algorithm tells police to patrol the South Side of Chicago more heavily than Lincoln Park, it is not just responding to data. It is responding to the legacy of redlining, segregation, and poverty.
This dynamic creates what sociologist Sarah Brayne calls “systemic entrenchment.” The system doesn’t just reflect inequality. It reproduces it through repeated exposure and intervention. Once flagged, individuals often remain on the radar indefinitely, even if their behavior changes. This has cascading effects on employment, education, and mobility.
Meanwhile, wealthier neighborhoods, often with higher rates of unreported white-collar crime, remain largely untouched by the algorithm. Their invisibility is interpreted as safety.
Transparency Without Accountability

A central challenge in confronting predictive policing is the opacity of the algorithms themselves. Many of the systems used by police departments are proprietary, developed by private companies shielded from public scrutiny. Departments often sign contracts that limit disclosure of how the algorithms work, what data they use, or how they weigh different variables.
This lack of transparency undermines democratic oversight. Communities most affected by these tools have little say in their deployment, design, or evaluation. Even city officials may not fully understand how the predictions are generated. The technology becomes a black box of authority, its decisions unquestioned because they are coded as “objective.”
This is not merely a technical issue. It is a political one. The question is not just whether predictive policing works, but for whom and at what cost.
Alternatives and Resistance
Not all cities have embraced predictive policing, and some that once did have since stepped away. Santa Cruz, California, was the first city to ban predictive policing in 2020, citing racial bias and community distrust. Chicago dismantled its Strategic Subject List the same year. Los Angeles sharply curtailed its use of the LASER program after watchdog reports revealed discriminatory impacts.
Grassroots organizations have played a pivotal role in this shift. Groups like the Stop LAPD Spying Coalition, Data for Black Lives, and the Algorithmic Justice League have pushed for community-centered data practices, public audits, and policy bans on algorithmic surveillance.
Some cities are exploring alternative models rooted in community investment, violence interruption programs, and restorative justice. These approaches focus not on forecasting harm but on addressing the root conditions that allow it to fester.
Conclusion: Rethinking Safety in the Digital Age
Predictive policing is not a neutral tool. It is a technological expression of old power structures, reframed in the language of progress. Its use raises urgent questions about who defines safety, who is watched, and who is believed.
In a moment when data governs more of our lives than ever before, we must resist the seduction of efficiency without equity. An algorithm cannot fix what is broken at the root. And no software, no matter how advanced, can stand in for justice.
To confront the silent algorithm is to demand more than reform. It is to demand that we imagine a world where safety is not policed but built from the ground up, together.
Originally published by Brewminate, 07.16.2025, under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.