Problems are made, not found.
📻 PLUS: The audio version of "Beyond minimizing harms," and a question for you.
Welcome back to Untangled, a newsletter and podcast about technology, people, and power. I want to start with a huge note of gratitude to all of you who signed up for the paid edition of Untangled. It warms my wonky heart! This issue is an example of one type of post that will only be available to paid subscribers starting February 1. Don’t want your future self to miss out? You know what to do.
If Untangled leads you to just one better idea, a more informed decision, or a new framework that helps you see this weird tech-mediated world of ours more clearly, then the subscription will have been worth it. Think of it as an investment in yourself!
Or, you can think of it as supporting me in doing something I absolutely love. 😍
Now, on to the show!
Oh, hello
This is the audio version of my latest essay, “Beyond minimizing harms.” It also doubles as a fun jaunt through another primer theme contender: problems are made, not found.
👾 One quick thing before we get into all that: It’s not too late to participate in the official Untangled ChatGPT3 game. Just pop over here, follow the instructions, and I’ll take it from there.
✋Stop, Primer time!
In “Beyond minimizing harms,” I wrote:
To see what I mean, try to locate ‘the problem’ with Amazon’s recruitment algorithm. Where exactly did the problem lie? Was it when an Amazon employee decided to use an anti-classification algorithm? Was it in the data that trained the algorithm? Or is it in the systemic marginalization of women in the tech sector? Or perhaps in the meritocratic belief system that undergirds the idea that an algorithm can be fair?
Once confronted with all these questions, it becomes clear that we aren’t ‘locating’ problems, we’re creating them. The ways in which we choose to understand algorithmic failures are where most of the work lies, because these failures go way beyond the technical. In ‘Seeing Like An Algorithmic Error’, Mike Ananny argues that “to say that an algorithm failed or made a mistake is to take a particular view of what exactly has broken down — to reveal what you think an algorithm is, how you think it works, how you think it should work, and how you think it has failed.” In other words, an algorithmic error, failure, or breakdown isn’t found or discovered, but made. To some, an error is a big problem; others see no error at all — it’s just the system working as intended.
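If it helps to make that concrete, here is a toy sketch of ‘anti-classification’ in practice: the model is never shown the protected attribute, yet a proxy feature inherited from a biased hiring history still does the same work. The data, feature names, and scoring rule below are invented for illustration; this is not Amazon’s actual system.

```python
# Toy illustration of anti-classification: the model never sees gender,
# yet a proxy feature learned from biased hiring history still penalizes women.
# All data, feature names, and the scoring rule are invented for illustration.

# Historical résumés. Gender is recorded here only so we can see the pattern;
# it is withheld from the model below.
resumes = [
    {"gender": "m", "years_experience": 6, "womens_chess_club": 0, "hired": 1},
    {"gender": "m", "years_experience": 5, "womens_chess_club": 0, "hired": 1},
    {"gender": "f", "years_experience": 6, "womens_chess_club": 1, "hired": 0},
    {"gender": "f", "years_experience": 5, "womens_chess_club": 1, "hired": 0},
]

def features(r):
    """Anti-classification: drop the protected attribute before modeling."""
    return {"years_experience": r["years_experience"],
            "womens_chess_club": r["womens_chess_club"]}

def train(rows):
    """Weight each feature by how its average among past hires differs from the overall average."""
    hired = [r for r in rows if r["hired"] == 1]
    weights = {}
    for name in features(rows[0]):
        avg_hired = sum(features(r)[name] for r in hired) / len(hired)
        avg_all = sum(features(r)[name] for r in rows) / len(rows)
        weights[name] = avg_hired - avg_all
    return weights

def score(weights, candidate):
    return sum(weights[name] * value for name, value in features(candidate).items())

weights = train(resumes)

# Two new candidates with identical experience; neither record mentions gender.
candidate_a = {"years_experience": 6, "womens_chess_club": 0}
candidate_b = {"years_experience": 6, "womens_chess_club": 1}
print(score(weights, candidate_a))  # 0.0
print(score(weights, candidate_b))  # -0.5: the proxy feature drags the score down
```

The point of the sketch isn’t the arithmetic; it’s that the ‘error’ doesn’t live in any single line of code. It lives in the labels, which carry the history, and in how we chose to frame the task in the first place.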
This is similar to the point I made in ‘Accidents are inevitable.’ In describing the collapse of the Celsius Network, an unregulated crypto hedge fund, I argued that we tend to blame ‘humans in the loop’ when something fails. The failure becomes a “moral crumple zone” wherein, according to cultural anthropologist Madeleine Clare Elish, “responsibility for an action may be misattributed to a human actor who had limited control over the behavior of an automated or autonomous system.” In the case of Celsius, this focus on the individual created an overly narrow view of what failed and distracted from the dynamics of the broader sociotechnical system.
How we make mistakes, errors, and failures isn’t all that different from how we make problems. In “Tech for…what, exactly?” I summarized how WorldCoin justified the creation and deployment of an orb loaded with cameras and sensors to collect one’s biometric information. I wrote:
You might be wondering what biometrics have to do with UBI. Well, WorldCoin will argue that proving ‘human uniqueness’ while preserving privacy is an unresolved technical problem and that it’s critical to ensuring that every person has “not claimed their free share of WorldCoin already.” So the data capture part might be less about UBI, and more about fraud prevention. As WorldCoin CEO Alex Blania put it, “our goal is to use the data for the sole purpose of developing our algorithms to minimize fraud and enhance user privacy.”
WorldCoin’s focus on fraud and fairness might sound reasonable, as if unlocking UBI is impossible without first confronting fraud — but it’s not. Rather, it’s a turbocharged example of a ‘tech for good’ mindset and approach that frames the problem in a way that launders the interests, expertise, and beliefs of technologists.
The frames of ‘fraud’ and ‘fairness’ aren’t all that different from the narrow focus on the human in the loop in the example of Celsius. To Ananny’s point, both reveal how those involved viewed the breakdown of the sociotechnical system, and how they think it should work.
The truth is, we make problems all the time. The frames we adopt, the boundaries we put on a given system, the perspective we bring — all of these things contribute to the construction of a given problem. How these problems are settled — the process from contestation to commonplace understanding and social acceptance — is about power. Power over who gets to decide what’s at stake; over whose solutions are deemed legitimate.
Part of my hope is that Untangled helps align our attention, decisions, and actions — personal and professional! — toward systemic problems. Let’s make progress on the system and/or transform it, rather than continue to whack away at those moles, shall we?
❓So here’s my question: what’s an example of something we often consider a “tech problem” but that has more systemic roots? Hit reply and let me know.
👉Last, my invitation to you: I’m interviewing Mike Ananny, USC Professor of Communications & Journalism, about his paper “Seeing like an algorithmic error” for the pod. Got a question you’d like me to ask? Send it my way by the end of the week and I’ll do my best to include it.
Okay, that’s it for now.
Until next time,
Charley