Building alternative AI futures
PLUS: Your last chance for a free, signed copy of Elise Hu's new book
Hi, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. In September:
I wrote about unrealistic beauty standards and how AI might alter them and us.
I updated and added to the Untangled Primer, a special issue that serves as a nice li’l framework for making sense of our relationship to technology — and taking daily actions to change it.
I interviewed NPR’s Elise Hu about pretty privilege, appearance labor, and worthiness. For the first three people to sign up for an annual paid subscription to Untangled, Elise is generously (!) offering a free, signed copy of her book. So you should probably do that right now.
I write about a lot of big, systemic problems, which I realize can be a bit of a bummer. But I wholeheartedly believe that it's in untangling these problems -- in separating the technical elements from social systems like race, gender, and power -- that we can create radically different futures. It's in the untangling that seemingly inevitable technological trends are broken down into a series of choices — in how we understand a given problem, in how we think of technology, and in whose experience and expertise we value. It's in these choices that we can chart alternative futures. That’s what I try to do in this essay about so-called ‘predictive policing’ algorithms. This essay was featured in AI Untangled, my first Tiny Book, which is free for paid subscribers. Check out the table of contents here:
Now on to the show!
August 9th marked the ninth anniversary of the day police officer Darren Wilson shot and killed Michael Brown, an unarmed Black teenager, in Ferguson, Missouri. Two years later, the St. Louis County Police Department adopted HunchLab, a predictive policing system. According to a piece by the Marshall Project, officers believed the data would “help officers police better and more objectively […] By identifying and aggressively (emphasis mine) patrolling ‘hot spots’ as determined by the software,” the article states, “the police wanted to deter crime before it ever happened.” This continued belief in ‘predictive algorithms’ represents a future that chases the past under the guise of ‘data-driven’ tools.
It feels as though there is a contingent of influential decision-makers wedded to the idea that ‘predictive algorithms’ actually predict the future. But they don’t: predictive algorithms are rooted in historical data and can only offer a future that is still tethered to that data. There’s no way we can construct alternative futures out of algorithms trained on the past. Purveyors of predictive algorithms appear to believe more in the past than in their ability to shape an alternative future.
Nowhere is this dynamic clearer than in policing in the United States. And, in examining predictive policing, we can also uncover the seeds of radically different futures. Let’s dig in, and incorporate lessons from the Untangled Primer along the way, shall we?
‘Predictive policing algorithms’ are in use across more than 150 police departments in the US and have been around since the 1990s. These systems promise to forecast ‘criminal activity’ and to determine where officers should go and whom they should police. But the training data behind these systems don’t actually support either task. First, these tools are often trained on arrest data, which means that racially biased police actions directly inform the algorithmic system. Moreover, it’s unclear how some people end up classified as high-risk at all. For example, the Chicago Police Department developed the ‘Strategic Subject List’ to algorithmically predict the likelihood that an individual is at risk of becoming a victim or an offender in a shooting or homicide. But an evaluation of the tool found that more than one-third of the individuals on the list had never been arrested or been the victim of a crime, and almost 70% of that cohort received a high-risk score. In other words, in an attempt to ‘predict risk,’ the tool actually manufactured it by encouraging an encounter with the police.
So police departments are never exclusively responding to potential crimes; they’re contributing to the production of crime too. This points to the idea that data and technologies often say more about organizations and companies — their organizational structures, interests, and accomplishments — than they do about us or the technology (the fourth Primer theme). The data aren’t ‘predictive’ at all — they’re descriptive and diagnostic of the practices and behaviors of police departments.
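To see how that feedback loop plays out, here is a deliberately simplified toy simulation (not HunchLab, the Strategic Subject List, or any vendor's actual model; every number in it is made up). Two neighborhoods have the same underlying rate of offending, but one starts with more recorded arrests because it has historically been patrolled more heavily. If patrols are then allocated in proportion to past arrests, the recorded data keep reflecting where officers were sent rather than where offenses actually happen:

```python
import random

random.seed(0)

# Toy assumption: both neighborhoods have the SAME underlying offense rate.
TRUE_OFFENSE_RATE = 0.1
PATROLS_PER_YEAR = 100

# Historical arrest counts (the "training data"). Neighborhood A starts higher
# only because it was patrolled more heavily in the past.
arrests = {"A": 60, "B": 40}

for year in range(1, 11):
    total = sum(arrests.values())
    for hood in arrests:
        # "Predictive" allocation: patrols go where past arrests were recorded.
        patrols = round(PATROLS_PER_YEAR * arrests[hood] / total)
        # Arrests can only be recorded where officers are looking, so the data
        # reflect patrol intensity, not behavior.
        new_arrests = sum(random.random() < TRUE_OFFENSE_RATE for _ in range(patrols))
        arrests[hood] += new_arrests
    share_a = arrests["A"] / sum(arrests.values())
    print(f"Year {year}: arrests={arrests}, share recorded in A={share_a:.0%}")
```

Run it and neighborhood A's share of recorded arrests hovers around 60% year after year, even though nothing about the residents differs: the initial patrol pattern is baked into the data and fed back to the algorithm as if it were a fact about the neighborhoods. That's the sense in which the tool describes police practice rather than predicts crime.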
Therefore, step one of imagining an alternative future is reconsidering what data can tell us. If the data aren’t predictive but diagnostic, then ‘predictive policing’ algorithms offer a nice snapshot of over-policed communities and the practices of police departments.
The second step in imagining alternative futures requires accounting for what these so-called predictive tools leave out. See, one thing that’s weird about these tools is that they don’t include data on predominantly white-collar crimes.