Beyond ‘minimizing harms’
Algorithmic bias is still a huge problem — but where do those problems start?
Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. If you don’t yet have a New Year’s Resolution, might I suggest, ‘make Untangled go viral?’ Just share this essay with a few friends and consider your work for 2023 done.
In December, I:
Published an essay on identification technologies, classification systems, and power, and then followed it up with an audio essay on the same topic.
Turned the Untangled Primer into an ongoing project and considered a new theme: technologies encode politics.
Published a newsletter on ChatGPT, and how it’s rather Trumpy.
Two last things before we get into it:
A few of you have reached out to ask how you can support me and all of the work I put into Untangled. Stay tuned for an announcement on that very topic next week!
If this essay prompts a question for you, email me with it this week, and I’ll do my best to answer it in the audio essay.
Now on to the show!
Remember playing the game whac-a-mole as a kid? You hit the mole, it returns to its hole, and then, in time, it pops back up, and on and on you go. That’s sometimes how it feels in tech these days: we identify a harm, take a whack at it, the harm goes away for a time, and then eventually, it returns.
What if, instead, we could alter the systems that allow these harms to flourish? How do we move from ‘minimizing harms’ to transforming the algorithmic systems that keep propagating them? Let’s dig in.
This transition hinges on how we see ‘algorithmic systems’ in the first place. In “Algorithmic reparation,” scholars Jenny L. Davis, Apryl Williams, and Michael Yang critique the field of fair machine learning (FML) and propose an alternative — algorithmic reparation — rooted in an anti-oppressive, intersectional approach. ‘Fair’ machine learning assumes that we live in a meritocratic society; that, as the authors write, unfairness is the result of “fallible human biases on the one hand, and imperfect statistical procedures, on the other.”
But what if we stopped assuming that we live in a meritocratic society, and instead assumed that, as the authors explain, “discrimination is entrenched” and “compounding at the intersections of multiple marginalizations”?