Predictive Police Tech Isn’t Making Communities Safer — It’s Disempowering Them

“The truth is, every time community groups have asked questions about policing, the police haven’t had good answers. And when really pushed, they had to fold and recognize that maybe this technology wasn’t worth the money, wasn’t doing what it was said to do. And while sure, it sounded good in a soundbite, it sounded good to the city council when you said you had to do something to stop crime, in reality, it wasn’t doing what it said, and may also have had real harms on those communities,” says Andrew Guthrie Ferguson, author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. In this episode of “Movement Memos,” Guthrie Ferguson and host Kelly Hayes explore the history and failures of predictive policing, and raise the alarm about the creation of new data empires.

Music by Son Monarcas & David Celeste

Note: This is a rush transcript and has been lightly edited for clarity. Copy may not be in its final form.

Kelly Hayes: Welcome to “Movement Memos,” a Truthout podcast about organizing, solidarity, and the work of making change. I’m your host, writer and organizer Kelly Hayes. This week, we are talking about high-tech policing and how so-called predictive technologies hurt our communities. This episode is a bit of a primer on predictive policing, which I hope will help set us up for deeper conversations about how activists are resisting mass surveillance and other police tech. We’re going to be hearing from Andrew Guthrie Ferguson, a law professor at American University Washington College of Law, who is also the author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. Andrew’s book is a great introduction to these technologies, and some of the companies behind them, and it also does an excellent job of explaining how these technologies have already failed us.

These supposed advancements in policing have not only failed to reduce crime, but have actually caused more harm in our communities, and that harm is persistent and cyclical. Because when it comes to police reforms and Big Tech, big promises frequently lead to massive failures, which often result in more investment, not less. The answer, we are always told, is more money, and even more ambitious initiatives. Rather than learning from past failures or practicing accountability, Big Tech leaders and police reformers hype up the next innovation, and insist that with more money, and bigger technologies, all of their failed thinking will finally add up to success.

In my conversation with Paris Marx last year, we discussed how the automation hype of the 2010s should be considered when parsing the hype around AI. During those years, we were told that automation was going to transform our lives, and that truck drivers and food service workers would all soon be replaced by machines. That, of course, didn’t happen. Now, as AI leaders begin to pivot in their own messaging, after a year of big promises and fearmongering, I think police tech is something that we should all scrutinize in light of the bold promises of Big Tech’s social and economic interventions. Because the products they let loose upon our communities have real consequences. Safety is not improved and our quality of life is not enhanced, but mass surveillance grows, and the systemic biases that govern our lives become more entrenched. We’re going to talk more about what that looks like, today and in some future episodes, but I want to start by taking these technologies out of the realm of science fiction, which is how I think they exist in many people’s minds. We’re not talking about fictional stories like Minority Report, where future crimes can be predicted in advance. Despite the talking points of tech leaders and some government officials, nothing like that is anywhere on the horizon. So, let’s talk about what does exist, why it doesn’t work, and how communities have rebelled against it.

If you appreciate this episode, and you would like to support “Movement Memos,” you can help sustain our work by subscribing to Truthout’s newsletter or by making a donation at truthout.org. You can also support the show by subscribing to the podcast on Apple or Spotify, or wherever you get your podcasts, or by leaving a positive review on those platforms. Sharing episodes that you find useful with your friends and co-strugglers is also a huge help. As a union shop with the best family and sick leave policies in the industry, we could not do this work without the support of readers and listeners like you, so thanks for believing in us and for all that you do. And with that, I hope you enjoy the show.

(musical interlude)

Andrew Guthrie Ferguson: My name is Andrew Guthrie Ferguson. I’m a law professor at American University Washington College of Law here in Washington, DC. I’m the author of The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, and I write about issues of surveillance technology, including big data policing, predictive policing, facial recognition technologies, persistent surveillance systems, the internet of things, and basically all the ways that our government is surveilling us in different ways and some of the ways we’re surveilling ourselves.

I began my law career as a public defender in Washington, DC, trying everything from juvenile cases to homicides, and I teach criminal procedure, evidence, and criminal law here at the law school.

So, I think the best way to think about predictive policing is to divide it up into three categories. Two are, I would say, almost traditional now, because we’ve been experimenting with them, and largely failing, for the last decade.

The first is place-based predictive policing, the idea that you can take past crime data, maybe the type of crime, the location, the time of day, and use that information to “predict” where a future crime will be such that you can put a police car in the right place at the right time, and either deter the crime or catch the person who’s committing the alleged act. That’s a theory. There have been many problems which we can talk about. But the theory is that past crime data might be helpful to predict future actions.

And some of the background for where place-based predictive policing came from is that there are certain crimes, burglaries, car thefts, that are almost predictable, only in the sense that there’s something about the environment, the environmental risk factors, that leads people to commit a series of car thefts there. Maybe it’s a parking lot without lighting or anyone around, or maybe it’s a burglary in a neighborhood where, statistically speaking, if there’s one burglary, there are likely to be more burglaries in that same neighborhood, probably because it’s the same group of people going back to keep trying their hand at the crime.

And so that insight, that you could actually take past crime and predict future crime, has been adopted by police departments, and has largely proven unhelpful and a failure, and we can talk about that. But that’s place-based predictive policing.
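To make the mechanism Ferguson is describing a little more concrete, here is a minimal sketch, in Python, of the kind of “hotspot” logic place-based systems are built on: count recent reported crimes per map grid cell and rank the cells so patrols are sent to the highest-scoring ones. This is an illustration with made-up data, not PredPol’s or any vendor’s actual model, but it shows the core point that the only input is past report data, so the predictions inherit whatever bias that data contains.

```python
# Illustrative sketch only -- not any vendor's real algorithm.
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical historical incident records: (crime_type, grid_cell, timestamp)
incidents = [
    ("burglary", (12, 40), datetime(2024, 5, 1, 22, 15)),
    ("burglary", (12, 40), datetime(2024, 5, 3, 23, 5)),
    ("car_theft", (12, 41), datetime(2024, 5, 2, 1, 30)),
    ("burglary", (30, 7), datetime(2024, 4, 20, 21, 0)),
]

def rank_hotspots(incidents, now, window_days=30, top_n=3):
    """Score each grid cell by its recent incident count and return the top cells."""
    cutoff = now - timedelta(days=window_days)
    counts = Counter(cell for _, cell, when in incidents if when >= cutoff)
    return counts.most_common(top_n)

print(rank_hotspots(incidents, now=datetime(2024, 5, 5)))
# The cells with the most recent reports rank first, so past reports drive
# future patrols -- and over-policed areas keep generating the very data
# that sends police back to them.
```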

Person-based predictive policing is a second type of predictive policing, which basically says we can use risk factors of individuals, maybe that they’d been arrested or convicted, or even that they were the victim of a violent crime, to predict that they might be involved in criminal activity in the future. This has been tried in Chicago, it’s been tried in Los Angeles, and it has, by and large, completely failed. But the idea was that we could take this past crime data and use it to focus police resources to target the group of people who are most at risk of committing crimes.

And we can talk about the failings and the problems with it. But the theory was that in a world of finite resources, police officers could target, within a focused deterrence framework, the people they thought were most at risk of committing crime.

The third, more general sense of predictive policing is something we’re seeing now, coming to the fore with the rise of video analytics and artificial intelligence: the idea that you might be able to predict, using pattern matching, certain activities that could be seen as suspicious. So for example, you could train video analytics to recognize something that more or less looks like what could be a robbery. And then as the cameras are rolling, the software is also running, able to see a pattern that reminds the computer of a particular kind of crime and alert officers to the scene.

We also see it with automated license plate readers. Maybe the way you travel in your car is consistent with how drug traffickers might go about their business. And so the algorithm would generate a prediction that this car, which has been driving in this pattern, might be involved in criminal activity. Again, a different form of predictive policing. But the idea behind all of them is that you can take past crime data, you can run it through an algorithm, and somehow, through the miracle of data-driven analysis, predict the future.

As I said, and as we can talk about, both place-based predictive policing and person-based predictive policing have largely failed. And while the jury might be out on some of this AI video analytics, odds are it’s probably going to meet the same ill fate.

KH: Now, as we dive into explanations of these varying technologies and their shortcomings, I want us to remember the divergence between our interests and those of tech companies, the police, and the people and forces that are actually served by policing. I think these words from Mariame Kaba and Andrea Ritchie’s book No More Police might help us along:

Police exist to enforce existing relations of power and property. Period. They may claim to preserve public safety and protect the vulnerable, but police consistently perpetrate violence while failing to create safety for the vast majority of the population, no matter how much money we throw at them. Their actions reflect their core purposes: to preserve racial capitalism, and to manage and disappear its fallout.

Police reforms have a long history in the United States. When the violence of policing results in a destabilization of the social order, due to organizing and protests, we get reforms, such as the professionalization of policing in the last century and the technological solutions of today. Those reforms heap more resources upon police, but do not change the core functions of policing. With those core functions intact, the same problems persist, and we are told that still more reform, and thus more resources, are needed. I want us to keep these trends in mind as we think about the directions that predictive policing has taken, and will continue to take, in our lives. I also want us to understand why companies that produce failed technologies, such as SoundThinking, continue to receive more investment – because the truth is, they are succeeding at something, even if that something isn’t public safety.

AGF: So place-based predictive policing got its start in Los Angeles. There was an idea that the Los Angeles Police Department could take past crime statistics and be able to reallocate police resources to go to the particular places where the crime would occur.

There was a company, then called PredPol, that had a contract with the Los Angeles Police Department and basically sold them this idea that data-driven policing through this algorithm could help them do more with less, be at the right place at the right time, and that they could actually put their police car where they thought, let’s say, a burglary or a car theft would be, or anything else.

Now, what happened over time, and it took almost a decade of movement activists pushing against this kind of technology, was that essentially they could not show that the predictions were accurate, that they were actually putting the police car in the right place at the right time and thus reducing crime. And furthermore, they were essentially targeting certain areas that, of course, correlate with poverty, correlate with economic need and deprivation. And they were spending police resources to follow the data to these particular places, rather than necessarily focusing on why there might be criminal activity in those places in the first place.

And so place-based predictive policing began in Los Angeles, but then spread like wildfire throughout much of the last decade, through jurisdictions all across the country that saw the logic of taking past crime data, which they had, and applying it to the future, without really asking the question of A, does this work? B, will this actually be helpful to officers? And C, is it worth the money? Because of course, there are opportunity costs: the money you spend on a technology is money you’re not investing in a community.

And as we’ve seen over the last decade, after departments adopted this technology, they could largely not show that it worked, that it lowered the crimes they were worried about. What could be shown was that it was actually targeting poor communities and communities of color in many places. And further, many times they didn’t even have buy-in from their own officers, who were told to follow the algorithm.

And so this whole sort of management structure that was based on predictive algorithms resulted in a system that was costing money, not lowering crime, not helping out the police, not even wanted by the police, and generating a lot of community outrage that there was this algorithm that was determining police resources in their community.

And of course, as one of the great flaws of most policing in America, the community wasn’t even consulted about whether this was something that they wanted, needed, or thought was a good idea.

And so after many communities learned…

© Truthout