How the Netflix show 'The Ultimatum' explains complex algorithmic systems.
PLUS: The metaverse is back, baby! Here’s what you need to know.
Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. With the launch of Apple Vision Pro, “the metaverse” is back in the public discourse. That means there is no time like the present to re-read the essay I wrote a year and a half ago — “Let’s imagine an alternative metaverse.” In it, I pointed out that all of the current framings of the metaverse either accept the status quo or risk making it far worse. Here’s me:
So, the sci-fi metaverse promises an escape from a deeply unequal reality. The corporatist metaverse is a scramble for ever more personal data. The crypto metaverse is a land grab for digital goods and, well, digital land. What all of these visions share is an acceptance or entrenchment of the status quo. But we can and should imagine a metaverse that more directly challenges the status quo.
I then tried to imagine such an alternative by assessing the system from the perspective of those excluded from it or harmed by it as a result of their social locations. I think the essay holds up today, so I’ve freed it from the archival paywall for the next two weeks. Give it a read! If you find it interesting, think to yourself ‘Ah, there must be so many paywalled gems I’m missing out on’ and then immediately sign up for the paid edition of Untangled. (That’s 100% how I imagine this happening 🙃)
Now, on to the show.
Ever wonder how the Netflix show The Ultimatum — which I’m not at all emotionally invested in — explains the complexity of algorithmic systems? Well, you’re in for a treat!
The Ultimatum is about five couples in the same situation: one of the partners has given the other ‘an ultimatum’ — get married or we’re done! But before they decide, they ‘break up’ and for three weeks enter into a ‘trial marriage’ with a participant from one of the other couples. After that, they return to their original partner and live with them for the final three weeks. Then they decide who to marry — or go home alone. Invariably, the initial three-week break isn’t enough for the original couple to transcend their relational dynamics. Who would have thought three weeks isn’t enough time??
I kid, but the point is that when the original couple comes back together, they’re often stuck in their original dynamic, which looks a lot like what complex systems theorists would call a “reinforcing feedback loop.” Reinforcing feedback loops are engines of growth and decline: they amplify whatever movement is already underway, compounding something positive or accelerating a slide into something worse. As Peter Senge, author of The Fifth Discipline, describes:
“Whatever movement occurs is amplified, producing more movement in the same direction. A small action snowballs, with more and more and still more, resembling compounding interest. Some reinforcing processes are ‘vicious cycles’ in which things start off badly and grow worse.”
In short, absent external intervention, a reinforcing feedback loop will just keep spiraling in perpetuity, accelerating change. So when Tiff says something that prompts a reaction from Mildred, which prompts a reaction from Tiff, we’re off to the races. Balancing loops, by contrast, seek stability because they’re organized around a goal. Right, speed limits offer a goal — if you’re driving under the limit, you can speed up; if you’re over it, you know to slow down. The actions and reactions balance one another. Or, if Tiff and Mildred both treated their relationship like a separate, third thing that they attached a communications goal to — like, say, respectful disagreement — they might show up differently in the moment.
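If it helps to see the two loop types side by side, here’s a minimal sketch in Python. To be clear, the gain, goal, and correction values are invented purely for illustration: the reinforcing loop compounds whatever movement is already underway, while the balancing loop keeps correcting toward its goal, like a driver glancing between the speedometer and the speed limit.

```python
def reinforcing_loop(state: float, gain: float = 1.5, steps: int = 10) -> list[float]:
    """Each step amplifies the previous movement: the snowball."""
    history = [state]
    for _ in range(steps):
        state *= gain  # each reaction provokes a bigger reaction
        history.append(state)
    return history


def balancing_loop(speed: float, goal: float = 65.0, correction: float = 0.5, steps: int = 10) -> list[float]:
    """Each step nudges the state back toward a goal: the speed limit."""
    history = [speed]
    for _ in range(steps):
        speed += correction * (goal - speed)  # over the limit? slow down. under? speed up.
        history.append(speed)
    return history


print(reinforcing_loop(1.0))  # 1.0, 1.5, 2.25, 3.375, ... grows without bound
print(balancing_loop(80.0))   # 80.0, 72.5, 68.75, ... settles near the goal of 65
```

Run it and you can watch one trajectory snowball while the other settles down at its goal.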
Okay, so what does all this have to do with algorithms? Well, a great new paper by Dr. Nathan Matias argues that we need to study the “feedback loop of human–algorithm behaviour,” in which “multiple actors adapt in response to each other and small changes in one part of the system, such as an algorithm adjustment, can lead to unexpected consequences or even collapse.” Matias is arguing that we can’t just study people or algorithmic systems; we have to study the dynamic, ever-evolving relationship between us and the algorithm. If that’s not an implicit endorsement of Untangled, I don’t know what is. 🙃
See, we study people, understanding the cultures they are steeped in, the values they hold, and how those shape the design and impact of algorithmic systems. We study algorithmic systems — that is, if the company is willing to make its data available or participate in an audit — and how minute decisions and weightings of different variables lead to different outcomes. But we rarely study the dynamic between the two — the third thing — which is where we should focus our attention. For example, recommendation algorithms aren’t optimized for hate from the jump. Rather, they’re often optimized for engagement or time-on-platform, and for various reasons, many of us find hateful content engaging, whether we subscribe to the ideas expressed in it or not. The longer we stick around and consume that content, the more the algorithm learns to serve that impulse. The result is a feedback loop that builds on itself and can lead to truly horrible outcomes. It’s this relational dynamic that we need to study.
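Here’s a toy version of that loop in Python, and I want to be loud about the assumptions: the two content categories, the engagement rates, and the exploration rate are all invented, and this is a cartoon of a recommender, not how any real platform works. The “algorithm” just serves whichever category has the higher observed engagement rate; the “humans” engage slightly more with hateful content. Nobody in the simulation wants hate, and the feed drifts toward it anyway.

```python
import random

# Invented numbers: "hateful" content is slightly more engaging on average.
ENGAGEMENT_RATE = {"neutral": 0.05, "hateful": 0.08}


def simulate_feed(steps: int = 5000, seed: int = 0) -> dict[str, float]:
    """Serve whichever category has the higher observed engagement rate
    (with a little random exploration) and return each category's share
    of the final feed."""
    rng = random.Random(seed)
    shown = {"neutral": 1, "hateful": 1}    # impressions (start at 1 to avoid /0)
    engaged = {"neutral": 0, "hateful": 0}  # engagements observed so far

    for _ in range(steps):
        # The "algorithm": mostly exploit the best observed rate.
        if rng.random() < 0.1:
            pick = rng.choice(list(shown))  # occasional exploration
        else:
            pick = max(shown, key=lambda c: engaged[c] / shown[c])
        shown[pick] += 1
        # The "human": engages probabilistically, a bit more with hate.
        if rng.random() < ENGAGEMENT_RATE[pick]:
            engaged[pick] += 1

    total = sum(shown.values())
    return {c: shown[c] / total for c in shown}


print(simulate_feed())  # hateful content tends to end up dominating the feed
```

That drift is produced by the relationship between the two parties rather than by either one alone: the third thing.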
So what would that look like, and what might it require?
Matias lays out four steps for researchers.
Step one is “classify patterns of behavior.” Right, if researchers don’t agree on precise descriptions of the behaviors and dynamics involving people and algorithms, evidence can’t build on itself.
Step two is “explain how human-algorithm behavior emerges.” Basically, technologists and social scientists each hold only half of the goods. Technologists can account for why an algorithm took an action but can’t account for how people respond, and the reverse is true of social scientists.
Step three is “reliably forecast and intervene.” Once researchers have an explanation for how certain dynamics emerge, they can model it. They can forecast how tweaking the model, for example, might alter the patterned behavior, and then intervene accordingly. (There’s a toy sketch of this right after step four.)
Step four is “cultivate hybrids of science and engineering.” This is the most controversial step, and it attempts to answer the question that kept rattling around my brain while reading Matias’s paper: is this, uh, possible at all? People are fickle and behave in myriad, unpredictable ways, and algorithms adapt all the dang time. But one reason Matias is hopeful is that there are already a lot of commonalities. For example, he cites research showing plenty of commonalities across the kinds of algorithmic harms and the contexts in which they are produced. Another reason to believe there might be more commonalities? Homogeneity among the designers! As Matias writes, “One reason algorithms are curiously predictable is that their creators are so similar. Algorithm makers operate in similar legal environments, face similar economic conditions, receive similar training, and use similar data sets.”
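And here’s what step three, the forecasting bit, might look like on that same toy recommender. Again, everything here is hypothetical: the penalty mechanism and all the numbers are invented. The point is simply that once you can simulate the human–algorithm loop, you can sweep an intervention’s strength and forecast its effect before anyone touches a production system.

```python
import random

ENGAGEMENT_RATE = {"neutral": 0.05, "hateful": 0.08}  # same invented toy world as above


def hateful_share(penalty: float, steps: int = 5000, seed: int = 0) -> float:
    """Share of the feed that ends up 'hateful' when the ranker's score
    for flagged content is multiplied by (1 - penalty)."""
    rng = random.Random(seed)
    shown = {"neutral": 1, "hateful": 1}
    engaged = {"neutral": 0, "hateful": 0}

    def score(c: str) -> float:
        rate = engaged[c] / shown[c]
        # The hypothetical intervention: down-weight flagged content.
        return rate * (1 - penalty) if c == "hateful" else rate

    for _ in range(steps):
        pick = rng.choice(list(shown)) if rng.random() < 0.1 else max(shown, key=score)
        shown[pick] += 1
        if rng.random() < ENGAGEMENT_RATE[pick]:
            engaged[pick] += 1

    return shown["hateful"] / sum(shown.values())


# Forecast: sweep the intervention's strength before deploying anything.
for penalty in (0.0, 0.2, 0.4, 0.6):
    print(f"penalty={penalty:.1f} -> hateful share ~ {hateful_share(penalty):.2f}")
```

In this toy world, somewhere along that sweep the down-weighted score for hateful content drops below neutral’s and the reinforcing loop flips to feeding the other category instead: a small tweak producing a qualitative change in the whole system’s behavior, which is exactly the kind of thing Matias wants researchers to be able to forecast.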
📟 Call Backs
Matias’s paper asks us to zoom in on the dynamic between humans and algorithms. That’s critical and, I’d argue, half the battle. The other half? Zooming out! While reading Matias’s piece, I couldn’t help but think of Mike Ananny’s paper, “Seeing like an algorithmic error,” which essentially asks that we pull back the curtain on the structural forces that might have contributed to the problem in the first place. Right, while Matias would want us to delve into the feedback loop created between Sam and Aussie, Ananny would want us to interrogate how broader systems shape how they both show up in the moment. In other words, is the dynamic itself a symptom of something else, something deeper beneath the surface? These approaches aren’t mutually exclusive, and I’m not judging which is better (though I’m 100% making judgments about the relationships on the show!). In fact, I think it’s by combining these two views that we might see the whole system a li’l clearer.