How the Netflix show 'Ultimatum' explains complex algorithmic systems.
PLUS: The metaverse is back, baby! Here’s what you need to know.
Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. With the launch of Apple Vision Pro, “the metaverse” is back in the public discourse. This means there is no time like the present to re-read the essay I wrote a year and a half ago — “Let’s imagine an alternative metaverse.” In it, I pointed out that all of the current framings of the metaverse accept the status quo, or risk making it far worse. Here’s me:
So, the sci-fi metaverse promises an escape from a deeply unequal reality. The corporatist metaverse is a scramble for ever more personal data. The crypto metaverse is a land grab for digital goods and, well, digital land. What all of these visions share is an acceptance or entrenchment of the status quo. But we can and should imagine a metaverse that more directly challenges the status quo.
I then tried to imagine such an alternative by assessing the system from the perspective of those excluded from it or harmed by it as a result of their social locations. I think the essay holds up today, so I’ve freed it from the archival paywall for the next two weeks. Give it a read! If you find it interesting, think to yourself ‘Ah, there must be so many paywalled gems I’m missing out on’ and then immediately sign up for the paid edition of Untangled. (That’s 100% how I imagine this happening 🙃)
Now, on to the show.
Ever wonder how the Netflix show Ultimatum — which I’m not at all emotionally invested in — explains the complexity of algorithmic systems? Well, you’re in for a treat!
Ultimatum is about five couples in the same situation: one of the partners has given the other ‘an ultimatum’ — get married or we’re done! But before they decide, they ‘break up’ and for three weeks enter into a ‘trial marriage’ with a participant from one of the other couples. After that, they return to their original partner and live with them for the final three weeks. Then they decide who to marry — or go home alone. Invariably, the initial three-week break isn’t enough for the original couple to transcend their relational dynamics. Who would have thought three weeks isn’t enough time??
I kid, but the point is that when the original couple comes back together, they’re often stuck in their original dynamic, which looks a lot like a feedback loop. They are stuck in what complex systems theorists would call a “reinforcing feedback loop.” Reinforcing feedback loops are engines of growth or decline: they amplify whatever is already happening, compounding something positive or accelerating something negative. As Peter Senge, author of The Fifth Discipline, describes:
“Whatever movement occurs is amplified, producing more movement in the same direction. A small action snowballs, with more and more and still more, resembling compounding interest. Some reinforcing processes are ‘vicious cycles’ in which things start off badly and grow worse.”
In short, absent external intervention, a reinforcing feedback loop keeps spiraling, accelerating change as it goes. So when Tiff says something that prompts a reaction from Mildred, which prompts a reaction from Tiff, we’re off to the races. Balancing loops, by contrast, seek stability because they orient around a goal. Speed limits offer a goal: if you’re driving under the limit, you can increase your speed; if you’re driving over, you know to reduce it. The actions and reactions balance one another. Or, if Tiff and Mildred both treated their relationship like a separate, third thing with a communication goal attached (say, respectful disagreement), they might show up differently in the moment.
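If you think in code, here’s a minimal sketch of the two loop shapes. All the numbers are invented and the functions are just illustrations, not anyone’s model:

```python
# A minimal sketch (hypothetical numbers): a reinforcing loop amplifies
# change each step, while a balancing loop closes the gap to a goal.

def reinforcing_loop(x: float, gain: float, steps: int) -> list[float]:
    """Each step amplifies the current state, compounding like interest."""
    history = [x]
    for _ in range(steps):
        x *= 1 + gain  # movement produces more movement in the same direction
        history.append(x)
    return history

def balancing_loop(x: float, goal: float, rate: float, steps: int) -> list[float]:
    """Each step closes part of the gap to the goal, like adjusting to a speed limit."""
    history = [x]
    for _ in range(steps):
        x += rate * (goal - x)  # over the goal: slow down; under: speed up
        history.append(x)
    return history

print(reinforcing_loop(1.0, gain=0.5, steps=6))         # snowballs: 1.0, 1.5, 2.25, ...
print(balancing_loop(80.0, goal=65.0, rate=0.5, steps=6))  # settles toward 65
```

Run it and the first list snowballs while the second settles at the goal: the two shapes Senge is describing.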
Okay, so what does all this have to do with algorithms? Well, a great new paper by Dr. Nathan Matias argues that we need to study the “feedback loop of human–algorithm behaviour” in which “multiple actors adapt in response to each other and small changes in one part of the system, such as an algorithm adjustment, can lead to unexpected consequences or even collapse.” Matias is arguing that we can’t just study people or algorithmic systems in isolation; we have to study the dynamic, ever-evolving relationship between us and the algorithm. If that’s not an implicit endorsement of Untangled, I don’t know what is. 🙃
See, we study people, understanding the cultures they are steeped in, the values they hold, and how they shape the design and impact of algorithmic systems. We study algorithmic systems — that is, if the company is willing to make its data available or participate in an audit — and how minute decisions and weightings of different variables lead to different outcomes. But we rarely study the dynamic between the two — the third thing — which is where we should focus our attention. For example, recommendation algorithms aren’t optimized for hate from the jump. Rather, they’re often optimized for engagement or time-on-platform, and for various reasons, many of us find hateful content engaging, whether we subscribe to the ideas expressed in it or not. The longer we stick around and consume that content, the more the algorithm learns to serve that impulse. The result is a reinforcing feedback loop that builds on itself and can lead to truly horrible outcomes. It’s this relational dynamic that we need to study.
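To make that dynamic concrete, here’s a toy simulation. Every parameter is made up, and real recommenders are vastly more complicated, but it shows how a loop can ratchet upward even though neither side starts out “optimizing for hate”:

```python
import random

# Toy model (all numbers invented): a recommender that optimizes for
# engagement, and a user whose appetite drifts toward what they're shown.
# Neither side is "optimized for hate from the jump"; the spiral emerges
# from their interaction.

random.seed(0)

share_hateful = 0.10   # fraction of recommendations that are hateful content
user_appetite = 0.10   # probability the user engages with hateful content
BASELINE = 0.05        # engagement probability for everything else

for step in range(20):
    shown_hateful = random.random() < share_hateful
    engaged = random.random() < (user_appetite if shown_hateful else BASELINE)

    if shown_hateful and engaged:
        # Algorithm adapts: this content earned engagement, so serve more of it.
        share_hateful = min(1.0, share_hateful + 0.05)
        # User adapts: more exposure normalizes the content, so appetite grows.
        user_appetite = min(1.0, user_appetite + 0.05)

    print(f"step {step:2d}: share_hateful={share_hateful:.2f}, appetite={user_appetite:.2f}")
```

Notice that nothing in the loop body says “promote hate.” The ratchet emerges from two adaptive systems responding to each other, which is exactly why Matias says we have to study the relationship, not just the parts.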
So what would that look like, and what might it require?