⚖️ How can we govern speech fairly on anti-democratic platforms?
Maybe citizen assemblies are a start!
Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. March has been busy, y’all:
I wrote an essay about the problems of ‘scale thinking’: how it has affected engineering and software development, and a few key concepts that might bring us back from the brink.
I synthesized new research that addresses a real important question: do algorithmic decision-making systems even work?
I launched the Facilitation Leadership Lab with my long-time collaborator Kate Krontiris.
I collected practical lessons from the Primer into the first-ever Untangled checklist.
Speaking of the Primer, the second edition comes out next Sunday. If you want to know how more recent issues map to the existing Primer themes or what new themes made the cut, subscribe to the paid edition today.
Now, on to the show!
In my last essay, I wrote about the insidiousness of ‘scale thinking’ and argued that we must abolish from our brains the assumption that scale is inherently good. Did ya do that?
It’s okay, I can wait.
Okay, seeing as expunging scale thinking from our brains might take some time, I thought I’d analyze a practical proposal designed to address platform governance at scale: citizen assemblies! In our anti-democratic times, this might sound like the punchline to a bad joke, but it’s not. We tend to dismiss more deliberative, citizen-led approaches. Put people in charge? 🙈 Would never! But recent research on citizen assemblies and other representative, deliberative approaches shows promise. Let’s dig in.
Research demonstrates the effectiveness of citizen assemblies that combine representation and deliberative decision-making. In a synthesis of the literature on the science of deliberation, an interdisciplinary group of scholars argues that “Ordinary people are capable of high-quality deliberation, especially when deliberative processes are well-arranged: when they include the provision of balanced information, expert testimony, and oversight by a facilitator.” The scholars point out that these standards of deliberation are not some unreachable utopian ideal — they are actually achievable.
The evidence to date shows that these approaches, which emphasize representation, learning, intentional facilitation, and deliberation, seem to be working. One study found that “ordinary people thinking together can see through elite manipulation of symbolic political appeals.” Another found that “Deliberation can overcome polarization. The communicative echo chambers that intensify cultural cognition, identity reaffirmation, and polarization do not operate in deliberative conditions, even in groups of like-minded partisans.” A third found that “Deliberation leads judgments to become more considered and more consistent with values that individuals find that they hold after reflection.”
In stark contrast, the most recent attempts to innovate in governance at scale have largely ignored the importance of representation and deliberation. For example, as I’ve written previously, DAOs purport to democratize decision-making power, but so far, they’ve financialized governance. Most DAOs still operate on a one-token, one-vote model — which means if you have the most tokens, you have the most voting power.
This is a way of embedding economic inequities into both representation and deliberation. Who gets to voice their opinion, as well as their decision rights, are both distorted by who has the most resources. Moreover, while some DAOs are attempting to create decision-making processes that are seen as legitimate by their members, most still operate on a ‘rough consensus’ model that allows them to ‘move fast.’ And we all know how that ends!
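To make that distortion concrete, here is a minimal sketch (with hypothetical members and token balances, not data from any real DAO) of how a one-token, one-vote tally diverges from a one-person, one-vote tally once holdings are concentrated:

```python
def token_weighted_tally(votes):
    """Sum token balances for and against, as in one-token, one-vote DAOs."""
    totals = {"yes": 0, "no": 0}
    for member in votes:
        totals[member["choice"]] += member["tokens"]
    return totals

def one_person_one_vote_tally(votes):
    """Count each member equally, regardless of holdings."""
    totals = {"yes": 0, "no": 0}
    for member in votes:
        totals[member["choice"]] += 1
    return totals

# Hypothetical scenario: one whale holds more tokens than the
# other nine members combined.
votes = [{"choice": "yes", "tokens": 10_000}] + \
        [{"choice": "no", "tokens": 100} for _ in range(9)]

print(token_weighted_tally(votes))       # {'yes': 10000, 'no': 900}
print(one_person_one_vote_tally(votes))  # {'yes': 1, 'no': 9}
```

Same ballots, opposite outcomes: under token weighting the whale carries the vote 10,000 to 900, while counted as people the proposal fails 9 to 1.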
Then there are external advisory councils like the Facebook Oversight Board, Spotify’s Safety Advisory Council or Twitter’s newly announced ‘content moderation council’. These external councils are at the very least somewhat more deliberative than DAOs: they often accept public comments and their decisions reflect a thoughtful, deliberative process. If you want to see what this looks like, check out the Facebook Oversight Board’s decision recommending an overhaul of the company’s ‘XCheck’ program, which exempts high-profile influencers and celebrities from enforcement actions. But the types of decisions these councils have remit over are quite limited, as are their powers to effect real change. Hell, Elon Musk already let slip in private that the council would simply provide cover for his decisions. Not very deliberative or democratic if you ask me!
🦸🏻 If you want to dive real deep into the Facebook Oversight Board, read this blog post or this paper by Stanford law professor Evelyn Douek.
The power of these external councils might change over time, but one thing that is unlikely to evolve is how they’re structured — they will always be a small, elite group of people making decisions. Qualified people, no doubt: right now the Oversight Board includes judges, a Nobel laureate, a Pulitzer Prize-winning editor, and legal and human rights scholars. It’s exactly the kind of group you’d want to convene to provide smart advice to Mark Zuckerberg, but it’s not — nor will it ever be — representative of Facebook’s entire user base.
But what if it were? That’s the question that Aviv Ovadya is trying to answer by applying lessons from citizen assemblies to what he calls ‘platform democracy.’ Ovadya is a technologist and researcher at Harvard’s Berkman Klein Center, and he has helpfully summarized what we already know about a successful citizen assembly:
🧑🤝🧑 They’re representative. Successful assemblies use statistical sampling to ensure there is a diverse and representative set of backgrounds, experiences, and ideas.
🏫 There is a learning part, where participants learn from experts, discuss the material, and get to know the members of their group.
👂 There is deliberation — a facilitator helps the group explore the various proposals and ultimately come to a decision.
💲 They’re paid; participants are compensated for their time.
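The representation step relies on stratified random sampling, the statistical technique behind sortition-style selection. Here is a minimal sketch, assuming a simplified user pool with a single demographic attribute (real assemblies stratify on many attributes at once, and the pool and attribute names here are hypothetical):

```python
import random
from collections import defaultdict

def stratified_sample(pool, key, assembly_size, seed=None):
    """Draw an assembly whose strata mirror their shares of the pool."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for person in pool:
        strata[person[key]].append(person)
    assembly = []
    for group, members in strata.items():
        # Each stratum gets seats proportional to its share of the pool.
        seats = round(assembly_size * len(members) / len(pool))
        assembly.extend(rng.sample(members, min(seats, len(members))))
    return assembly

# Hypothetical pool: 70% of users in region A, 30% in region B.
pool = [{"id": i, "region": "A"} for i in range(700)] + \
       [{"id": i, "region": "B"} for i in range(700, 1000)]

assembly = stratified_sample(pool, "region", assembly_size=100, seed=42)
```

With this pool, the 100-seat assembly ends up with 70 members from region A and 30 from region B, mirroring the population rather than whoever shouts loudest.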
These approaches aren’t just being used to settle small grievances. For example:
The EU convened 800 people by lottery and put them through a deliberative decision-making process to inform future policy across climate change, migration, and health, among other issues.
France used this approach to develop a new climate policy in response to the Yellow Vest protests.
Ireland used a similar approach to resolve political division on abortion.
South Korea used a citizen assembly to make big important decisions on nuclear power.
Okay, so what might this look like in the context of social media? Ovadya imagines a world in which a representative body of users — let’s call them a ‘user assembly’ — is invited to learn, deliberate, and vote on specific policy proposals. For illustrative purposes, he offers the question, “What types of political ads should be allowed on Facebook in the United States, and with what targeting options?” In this example, representatives from the U.S. would learn from experts across industry, civil society, academia, and government about their preferred policy proposals, and then participate in a facilitated, deliberative process that ends in a vote on the various proposals. In this fictitious world, Facebook would commit to abide by any recommendation that gets over 70 percent of the assembly vote, and it would have to provide an explanation for not adopting any recommendation that receives more than 50 percent approval.
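In this illustration, the platform’s obligation reduces to a simple threshold rule on the vote share. The function and outcome labels below are my own hypothetical shorthand, not part of Ovadya’s proposal:

```python
def assembly_outcome(yes_votes, total_votes):
    """Map an assembly vote share to the platform's obligation, using the
    illustrative 70%/50% thresholds described above (labels are hypothetical)."""
    share = yes_votes / total_votes
    if share > 0.70:
        return "binding"        # platform commits to adopt the recommendation
    if share > 0.50:
        return "explain"        # platform must publicly justify non-adoption
    return "no-obligation"      # recommendation carries no commitment

print(assembly_outcome(75, 100))  # binding
print(assembly_outcome(60, 100))  # explain
print(assembly_outcome(40, 100))  # no-obligation
```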
Now, if you’re a loyal Untangled reader, a few things might pop immediately to mind:
If minoritized groups are systematically harmed, then a representative sampling might be inadequate. Why not overweight the sample to ensure those most impacted have a more equal voice?
What counts as ‘expertise’ anyway? Sure, those selected will need to have interdisciplinary backgrounds, but we also need to move beyond traditional notions of ‘expertise.’ Let’s avoid replicating structures of power within these deliberative groups, shall we?
Another question that might jump immediately to mind is, uh, why in the world would a company agree to this? Ovadya argues that the platforms are stuck, frustrated, and fearful. In my limited engagement with social media companies, that certainly resonates. There aren’t any good answers to hate speech, mis/disinformation, and polarization — and platforms have grown accustomed to getting hit with fines or reputational damage for every major content moderation or de-platforming decision. Still not convinced? Well, Ovadya is bringing receipts — Facebook is already piloting an initiative akin to his proposal.
This is all well and good, but here’s the crux of the issue: what matters are the questions themselves, and who gets to pose them in the first place. Ovadya’s example question above accepts the premise of political ads and political ad-targeting, even though these are both hotly contested issues. A Pew Research Center survey found that 54% of U.S. adults don’t think social media companies should allow any political ads, and 77% don’t think it’s okay for them to target users with political ads based on their online activity.
Ultimately, user assemblies are bound by the social media platforms that they represent. The questions, and the proposals that come out of them, are put forward by the company, not the assembly. Problem-setting is the key prerequisite to recommending solutions, so the real power lies in who gets to frame the problems. The question “what types of political ads should be allowed on Facebook in the United States?” would catalyze very different outcomes than asking “should Facebook be able to target ads based on user data?”
What if instead, user assemblies were convened to determine what set of questions future assemblies would decide upon? Or what if we took inspiration from the ballot initiative process, and structured the process such that any user could — with sufficient demonstrated support — determine which questions the user assembly deliberated on? User assemblies point us in the right direction, but ultimately, it’s a question of power.
As always, thank you to Georgia Iacovou, my editor.