🤔 Do algorithmic decision-making systems even work?
PLUS: The audio version of ‘Bigger isn’t better; it’s harmful.’
Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. Two announcements before we get into it:
You have less than two weeks left to claim a special offer from me to you: get 20% off Untangled forever.
Be the first to hear when enrollments happen for the Facilitation Leadership Lab. Already, 115 people have filled out the survey and expressed their interest, which raises the important question: what are you waiting for?? 🙃
Now on to the show!
Typically, if a technology doesn’t work as intended, we stop using it. If my keyboard started producing the wrong letters, I’d find another keyboard — I’ve got a newsletter to write! But we don’t apply the same logic to algorithmic decision-making systems, which are a lot more consequential than my li’l newsletter. They can determine who gets a job, who can access a loan, who keeps their healthcare, who gets bail, etc. These systems continue to make ‘errors,’ and then we recklessly give them another go!*
See, we’ve known for a long time that algorithmic decision-making systems regularly discriminate. Every week there are new examples. Just last week I came across two investigations:
The Markup showed that L.A. uses a racially biased algorithmic scoring system to determine who receives subsidized housing.
An investigation into Rotterdam’s welfare system revealed that the city used an algorithmic system to flag welfare recipients suspected of committing fraud. The system, it turns out, discriminates based on ethnicity and gender, and the report “also revealed evidence of fundamental flaws that made the system both inaccurate and unfair.”
Against this backdrop, researchers have offered frameworks to help determine when the use of these systems is valid in the first place. You might also remember the questions I borrowed from Virginia Eubanks in “Tech for…what, exactly?” to evaluate the legitimate use of such technologies:
Does the technology increase the self-determination and agency of the poor?
Would the technology be tolerated if it were targeted at non-poor people?
While it has been clear for some time that these systems discriminate, recent research suggests something more fundamental: these systems simply don’t work!