Problems are made, not found.
PLUS: The audio version of "Beyond minimizing harms," and a question for you.
Welcome back to Untangled, a newsletter and podcast about technology, people, and power. I want to start with a huge note of gratitude to all of you who signed up for the paid edition of Untangled. It warms my wonky heart! This issue is an example of one type of post that will only be available to paid subscribers starting February 1. Don't want your future self to miss out? You know what to do.
If Untangled leads you to just one better idea, a more informed decision, or a new framework that helps you see this weird tech-mediated world of ours more clearly, then the subscription will have been worth it. Think of it as an investment in yourself!
Or, you can think of it as supporting me in doing something I absolutely love.
Now, on to the show!
Oh, hello
This is the audio version of my latest essay, "Beyond minimizing harms." It also doubles as a fun jaunt through another primer theme contender: problems are made, not found.
One quick thing before we get into all that: It's not too late to participate in the official Untangled ChatGPT3 game. Just pop over here, follow the instructions, and I'll take it from there.
Stop, Primer time!
In "Beyond minimizing harms," I wrote:
To see what I mean, try and locate "the problem" with Amazon's recruitment algorithm. Where exactly did the problem lie? Was it when an Amazon employee decided to use an anti-classification algorithm? Was it in the data that trained the algorithm? Or is it in the systemic marginalization of women in the tech sector? Or perhaps in the meritocratic belief system that undergirds the idea that an algorithm can be fair?
Once confronted with all these questions, it becomes clear that we aren't "locating" problems, we're creating them. The ways in which we choose to understand algorithmic failures are where most of the work lies, because these failures go way beyond the technical. In "Seeing Like an Algorithmic Error," Mike Ananny argues that "to say that an algorithm failed or made a mistake is to take a particular view of what exactly has broken down: to reveal what you think an algorithm is, how you think it works, how you think it should work, and how you think it has failed." In other words, an algorithmic error, failure, or breakdown isn't found or discovered, but made. To some, an error is a big problem, while others see no error at all; it's just the system working as intended.
This is similar to the point I made in "Accidents are inevitable." In describing the collapse of the Celsius Network, an unregulated crypto hedge fund, I argued that we tend to blame "humans in the loop" when something fails. The failure becomes a "moral crumple zone" wherein, according to cultural anthropologist Madeleine Clare Elish, "responsibility for an action may be misattributed to a human actor who had limited control over the behavior of an automated or autonomous system." In the case of Celsius, this focus on the individual created an overly narrow view of what failed and distracted from the dynamics of the broader sociotechnical system.
How we make mistakes, errors, and failures isn't all that different from how we make problems. In "Tech for…what, exactly?" I summarized how WorldCoin justified the creation and deployment of an orb loaded with cameras and sensors to collect one's biometric information. I wrote:
You might be wondering what biometrics have to do with UBI. Well, WorldCoin will argue that proving "human uniqueness" while preserving privacy is an unresolved technical problem and that it's critical to ensuring that every person has "not claimed their free share of WorldCoin already." So the data capture part might be less about UBI, and more about fraud prevention. As WorldCoin CEO Alex Blania put it, "our goal is to use the data for the sole purpose of developing our algorithms to minimize fraud and enhance user privacy."
WorldCoin's focus on fraud and fairness might sound reasonable, as if unlocking UBI were impossible without first confronting fraud. But it's not. Rather, it's a turbocharged example of a "tech for good" mindset and approach that frames the problem in a way that launders the interests, expertise, and beliefs of technologists.
The frames of "fraud" and "fairness" aren't all that different from the narrow focus on the human in the loop in the example of Celsius. To Ananny's point, both reveal how those involved viewed the breakdown of the sociotechnical system, and how they think it should work.
The truth is, we make problems all the time. The frames we adopt, the boundaries we put on a given system, and the perspective we bring all contribute to the construction of a given problem. How these problems are settled, the process from contestation to commonplace understanding and social acceptance, is about power. Power over who gets to decide what's at stake; over whose solutions are deemed legitimate.
Part of my hope is that Untangled helps align our attention, decisions, and actions (personal and professional!) toward systemic problems. Let's make progress on the system and/or transform it, rather than continue to whack away at those moles, shall we?
So here's my question: what's an example of something we often consider a "tech problem" but that has more systemic roots? Hit reply and let me know.
Last, my invitation to you: I'm interviewing Mike Ananny, USC Professor of Communications & Journalism, about his paper "Seeing Like an Algorithmic Error" for the pod. Got a question you'd like me to ask? Send it my way by the end of the week and I'll do my best to include it.
Okay, thatâs it for now.
Until next time,
Charley