Hi, it’s Charley, and this is Untangled, a newsletter about technology, people, and power.
This week, I’m sharing my conversation with Louis, the creator of Unfollow Everything — the tool at the center of a new lawsuit against Meta that could change the internet as we know it. On to the show!
👇 ICYMI
Ever feel like you’re becoming a li’l less human, and a li’l more tool-like? Read this essay about how we’re molding ourselves to AI in pursuit of disembodied perfection and technological transcendence.
I analyzed a new lawsuit that would offer more control to social media users. Pair that write-up with my essay on citizen assemblies and how to cultivate democratic practices online, and you’ve got a blueprint for a better internet!
Two newsletters started recommending Untangled this week, including Jacqueline Nesi’s. Woot! I’m a big fan of both newsletters — give them a read!
🔗 Some Links
Google announced that it would further integrate AI into search. Read my essay on the limitations of chat-mediated search and my take on why this portends the end of the web as we know it.
Big week for OpenAI — it launched a new multimodal model, and the co-leads of its Superalignment team quit because “safety culture and processes have taken a backseat to shiny products.” That’s undoubtedly true, but also: ‘safety’ is a problematic frame, and, as I wrote in one of my favorite essays, “Alignment isn’t a problem — it’s a myth.”
In “The Dark Side of Dataset Scaling: Evaluating Racial Classification in Multimodal Models,” Abeba Birhane, Sepehr Dehdashtian, Vinay Uday Prabhu, and Vishnu Boddeti find that the likelihood of harmful misclassifications (e.g. labeling Black and Latino men as ‘criminal’) increases with training dataset size. Pair this paper with my write-up on the problem with ‘scale thinking’ and the latest special issue, “What does it mean to ‘train AI’ anyway?”, and you’ll see why pursuing scale harms marginalized groups.
In “Generative AI and the politics of visibility,” Tarleton Gillespie demonstrates that “generative AI tools tend to reproduce normative identities and narratives, rarely representing less common arrangements and perspectives.” Want to dig deeper into how AI is bad for representation? Read “The Artificial Gaze” and my write-up on how LLMs can’t represent identity groups.
🛠️ Unfollow your friends!
Last week, I analyzed a new lawsuit brought by University of Massachusetts Amherst professor Ethan Zuckerman and the Knight First Amendment Institute at Columbia University. If successful, the lawsuit would loosen Big Tech’s grip on our internet experience. In this conversation, I’m joined by Louis, the creator of the tool Unfollow Everything, which is at the center of the lawsuit. Louis and I discuss:
Unfollow Everything — what it was, and why Louis built it.
What it’s like to receive a cease-and-desist letter from a massive company.
Why this lawsuit would offer more consumer choice and control over our online experience.
The tools Louis would build to democratize power online.
Okay, that’s it for now,
Charley