Bigger isn't better; it's harmful.
Why ‘scale thinking’ is a big problem, and what to do about it.
Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. For the month of March, I’m sharing a special offer: get 20% off Untangled…forever! As they say in the biz, this offer won’t last long, so claim it now.
Did you claim the offer?
Go ahead, I can wait.
Okay, moving on - February was a busy month over here at Untangled HQ:
In “Will ChatGPT replace search?” I wrote about all the problems that might come from treating AI chatbots as search systems.
I spoke with Mike Ananny, associate professor of communications and journalism at the University of Southern California, about algorithmic errors: how we make them, what they say about our view of the world, and how we might think of them as public problems that require collective action.
I reflected on TikTok, speculation technologies, and the production of ignorance.
I wrote about the illusion of ChatGPT — what exactly it is, where it comes from, and why it’s a big ol’ problem.
I launched the Facilitation Leadership Lab with my long-time friend and collaborator Kate Krontiris. Fill out this short survey and be the first to hear when enrollment opens.
Now, on to the show!
My dad is an economist and I studied economics in undergrad. Hey look at that, it’s an apple right next to a tree → 🍎 🌲.
The point is, I quickly learned the theory of ‘economies of scale,’ or the idea that there are cost advantages to a company becoming ever bigger. The problem with ‘economies of scale’ is that they become self-justifying, leading to a ‘bigger is always better’ mentality. If your next thought is ‘wait, what’s the problem with that??’ well then, you’ve effectively demonstrated my point: our preoccupation with scale is so embedded in our understanding of work, business, and commerce that we accept it as something inherently good. It’s not. This is an essay about the problems of ‘scale thinking’: how it has affected engineering and software development, and a few key concepts that might bring us back from the brink. Let’s dig in.
The unquestioned celebration of scale might feel new, but it’s not. E.F. Schumacher, a German-British economist, wrote the book Small is Beautiful: Economics as if People Mattered, where he lamented, “Today, we suffer from an almost universal idolatry of gigantism. It is, therefore, necessary to insist on the virtues of smallness.” Schumacher was writing in 1973, but he traces the “economics of gigantism and automation” back to the 19th century. He wrote,
“The economic calculus, as applied by present-day economics, forces the industrialist to eliminate the human factor because machines do not make mistakes, which people do. Hence the enormous effort at automation and the drive for ever larger units. This means that those who have nothing to sell but their labour remain in the weakest bargaining position […] The economics of gigantism and automation is a leftover of nineteenth century conditions and nineteenth century thinking and it is totally incapable of solving any of the real problems of today.”
Writing around the same time as Schumacher, Ivan Illich, a Catholic priest, philosopher, and social critic, argued that as we consider whether our relationship to technology might be out of whack, “it is possible to identify a natural scale.” He continues, “when an enterprise grows beyond a certain point on this scale, it first frustrates the end for which it was originally designed, and then rapidly becomes a threat to society itself.” You can think of scale like a teeter-totter. It works just great when it is in balance with the original intent. But when it goes too far in one direction, there are problems galore!
Schumacher and Illich would be real bummed out if they were alive today. The pursuit of scale at all costs has become fully intertwined with how we develop, design, and govern technologies. This is the argument of Alex Hanna and Tina M. Park in Against Scale: Provocations and Resistances to Scale Thinking. The authors describe scale as, above all, a way of thinking. They write that scale thinking “frames how one thinks about the world (what constitutes it and how it can be observed and measured), its problems (what is a problem worth solving versus not), and the possible technological fixes for those problems.”
It might feel weird to question the value of scale, but that just goes to show how deep these norms run. Hanna and Park argue that scale thinking thrives in part because “scalability is imbued with moral goodness because it centers on designing a system that is able to serve a greater number of people with fewer resources over time.” Indeed, scale thinking is intertwined with the notion that efficiency is inherently good, and that we should in fact be optimizing for the greatest number of people with the fewest resources. This is an ideal I tried to dispel in my primer theme number six, where I explained that building in inefficiencies can help reduce big, systemic problems like the amplification of misinformation.
Scale thinking leads technology companies to design products and platforms in a way that assumes every user is the same. Hanna and Park put it this way:
“Heterogeneity becomes antithetical to scalability, because the same product/service can no longer be duplicated to sufficiently serve a diffuse audience. A varied user base means that many different solutions are needed, rather than a scalable solution.”
The presumed standardization and universalization of the user torque context and human complexity into lil’ datafied units that are easily quantified and made interchangeable. This typically leads to systems that accommodate people like me and harm those in a less privileged societal position. This connects to primer theme number five, or the idea that how one experiences a new technology partially depends on their societal position. Hanna and Park explain that when a system prioritizes scale, it requires users to all be the same — which means that anyone sitting outside of whatever narrow parameters the system is using will be harmed.
Scale thinking also leads us astray in how we approach governance. The logic goes something like this:
Unquestioned premise: Scale is inherently good.
Subsequent logic: We need systems of governance and redress that operate at scale.
Solution: Wahoo, automated content moderation will solve all of our problems.
Here’s the thing: we know automated systems won’t solve problems like misinformation and hate speech on extremely large social media platforms. And things will stay this way as long as we believe that scale is inherently good; that belief leaves us continually open to the reasonable-sounding — but totally wrong — argument that one day automated systems will moderate our problems away. As professor Niloufar Salehi argues,
“What is needed is not more sophisticated ways to identify and remove offending content—just as we don’t need better ways of policing and imprisoning people—but ways of supporting survivors and transforming the societies in which harm happens, including our online social worlds.”
Plus, the automated systems used by social media companies are actually quite good — they get it right a lot of the time — but the problems lie in the tails. And the tails, at scale, can cause significant harm.
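To see why the tails matter at scale, here’s a quick back-of-the-envelope sketch in Python. Both numbers in it are illustrative assumptions of mine, not real platform figures:

```python
# Back-of-the-envelope: a small error rate times a huge volume equals big harm.
# Both numbers below are illustrative assumptions, not real platform data.

daily_posts = 3_000_000_000  # suppose a platform moderates ~3 billion posts a day
accuracy = 0.99              # suppose its automated systems are right 99% of the time

errors_per_day = daily_posts * (1 - accuracy)
print(f"{errors_per_day:,.0f} moderation mistakes per day")
# -> 30,000,000 moderation mistakes per day
```

Even a system that gets it right 99% of the time produces tens of millions of mistakes a day at that volume, and those mistakes cluster in the tails: the languages, dialects, and communities the system was never tuned for.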
So what concepts might we use to replace ‘scale thinking’? One might be subsidiarity. As Amy Hasinoff and Nathan Schneider write in From Scalability to Subsidiarity in Addressing Online Harm, subsidiarity began as a theological principle, or “a way of understanding the social order in its relationship to the Church and to God.” But, as they continue, it has become “the principle that local social units should have meaningful autonomy wherever possible while maintaining their connections and responsibility to the larger systems in which they exist.”
So in the context of a platform like Facebook, subsidiarity might mean that Facebook HQ creates tools that empower communities to self-organize into smaller units, thus devolving the authority to make certain decisions to the groups themselves. In his book, Schumacher explained that subsidiarity actually leads to a more legitimate relationship between smaller units and the larger systems they exist within: “loyalty can grow only from the smaller units to the larger ones, not the other way around — and loyalty is an essential element in the health of any organization.” This means that the health and legitimacy of an online ecosystem such as Facebook’s depends on the loyalty of the smaller communities within it.
It wouldn’t be legitimate, then, in this view, for Facebook HQ to impose decisions on its communities — top-down prescriptions like this are never going to work for everyone in such a vast network. Moreover, if community groups did have the power to make their own decisions, Facebook HQ would bear less responsibility for meeting their needs, a responsibility it routinely shirks in the current setup. Responsibility would revert to Facebook HQ only when smaller units couldn’t execute their decisions by themselves.
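To make that architecture concrete, here’s a minimal sketch of what subsidiarity-as-code could look like. This is my own hypothetical, not anything Facebook has built or that Hasinoff and Schneider propose; every name in it is invented for illustration:

```python
# A hypothetical sketch of subsidiarity as platform architecture.
# None of these names correspond to a real platform's API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Report:
    community: str    # the smaller unit where the harm occurred
    content_id: str
    reason: str

# Each community registers its own process; returning None means
# "we couldn't resolve this ourselves."
CommunityHandler = Callable[[Report], Optional[str]]
community_handlers: dict[str, CommunityHandler] = {}

def platform_fallback(report: Report) -> str:
    # Platform-level redress is the last resort, not the default.
    return f"escalated {report.content_id} to platform-level review"

def resolve(report: Report) -> str:
    # Subsidiarity: the smallest unit that can act, acts first.
    handler = community_handlers.get(report.community)
    if handler is not None:
        outcome = handler(report)
        if outcome is not None:  # the community resolved it on its own terms
            return outcome
    # Only unresolved cases revert to the larger system.
    return platform_fallback(report)

# Example: one community opts for a restorative process instead of removal.
community_handlers["gardening-club"] = lambda r: (
    f"opened a facilitated conversation about {r.content_id}"
)
print(resolve(Report("gardening-club", "post-123", "insult")))
print(resolve(Report("no-handler-group", "post-456", "insult")))
```

The design choice that matters is the direction of deference: the platform’s code acts only when a community’s own process declines to, rather than communities inheriting whatever the platform decides.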
This would help to redistribute power within platform ecosystems, but Hasinoff and Schneider are excited about the concept of subsidiarity for another reason: it creates a space for context in addressing online harms. They write that the usefulness of subsidiarity is that it “invites forms of repair that are sensitive to the context where the harm occurs while also contributing to the health of the larger system.” It also invites new platform designs that align with community governance, not top-down universal solutions.
So what might this look like in practice? Well, platforms would design architectures and governance procedures that empower user groups to develop processes that are actually fit for purpose. One such process is transformative justice, which has been developed by and for communities of color, Indigenous communities, people with disabilities, and queer and trans communities. See, most online and offline practices of accountability and harm reduction (e.g. de-platforming, canceling, prison, being expelled from school) are punitive in nature, and therefore reproduce the kinds of violence and harm they purport to alleviate.
By stark contrast, as practitioner Mia Mingus writes, transformative justice “seeks to respond to violence without creating more violence and/or engaging in harm reduction to lessen the violence.” These approaches look to transform the underlying conditions that led to harm in the first place. Often, then, they zoom out beyond the victim and the perpetrator, involve the broader community, and work backward from a desired future state. It’s not enough to return to the same systems minus the harm and violence, because the systems themselves are harmful. Rather, as Mingus writes,
“It is not only identifying what we don’t want, but proactively practicing and putting in place things we want, such as healthy relationships, good communication skills, skills to de-escalate active or “live” harm and violence in the moment, learning how to express our anger in ways that are not destructive, incorporating healing into our everyday lives.”
All of this work requires trained facilitators and practitioners, a process that is contextual to the community, careful attention to the needs of the participants, and a creative, responsive process.
🧑🤝🧑 If you want to learn more about these approaches, check out this video (which includes one of my favorite thinkers, adrienne maree brown) or this paper by Anthony J. Nocella II, both of which I found thoughtful and clarifying.
This approach is completely at odds with scale thinking because it abjures specific outcomes, standardized processes, and universal solutions. Scholars like Jung Jin Choi argue that focusing on outcomes (e.g. an apology, closing the case) “de-centers the person who has been harmed and risks re-traumatizing them.” Moreover, the various localized problems that are caused or compounded by the effects of a wider system cannot be solved with standardized procedures. If the problems are messy, the solutions will be messy too — and that’s okay. As organizer and educator Mariame Kaba explains,
“We have to embrace the messiness of [the] process. The messiness is inherent. It will always be there. And by messy, I mean that there are multiple U-turns that are happening all the time, that people are sometimes their best selves and sometimes not, that we move forward in some places and backwards in another, and that all this stuff is actually part of the work.”
It’s in the pursuit of standardized processes or a specific outcome that we step out of the present context; that we speed past the messy, non-linear work of healing and resilience. As historian Bench Ansfield and organizer Jenna Peters-Gordon put it, “If we reach for ‘success,’ we are undermining the work.”
So, our updated logic might look something like this:
New premise: Scale isn’t all that and a bag of chips.
Subsequent logic: Cool, now we can situate governance and redress at the community level, which allows for context.
Solution: Let’s give transformative justice a try!
Annoyingly, any speculating I do about the future of scale is laced with a skepticism born of being steeped in scale thinking for so long. I struggle to imagine social media companies hiring transformative justice facilitators onto their Trust & Safety teams, which have lately been severely pared back or dissolved entirely.
But there’s a reason I often present you with an unquestioned premise and then follow it with alternatives: unlearning widely accepted ‘logics’ is half the battle. It’s in identifying and letting go of these ways of thinking that we can start to imagine alternative systems. For instance, ones where we stop thinking bigger and start thinking smaller.
As always, thank you to Georgia Iacovou, my editor.