🔮Alternative AI futures, algorithmic reparations, and the (!) AI ‘killer app’
PLUS: Next week's big announcement.
Hi, it’s Charley, and this is Untangled, a newsletter about technology, people, and power.
👉 Can’t afford a subscription and value Untangled’s offerings? You only need to reply to this email, and I’ll add you to the paid subscription list, no questions asked.
The main value of Untangled is the archives. I write evergreen essays that offer tools, conceptual frameworks, and lots of social science research — to inform how you think about technology, and your daily actions and decisions. That’s why I wanted to reshare the three most popular essays of 2024 from the archives:
Read these, immediately experience FOMO over the archives (what other gems are you missing out on??), and then relieve your anxiety by becoming a paid subscriber.
July is all about what humans can do that AI systems cannot:
I published an essay on several of these things: meaning-making, retrospection and introspection, imagination, developing mental models, and understanding concepts.
I shared a conversation with Vaughn Tan, a professor at University College London’s School of Management, about what it would look like to put meaning-making — our ability to make subjective decisions about what matters and why — at the center of AI product development.
Today I’m digging into how we might build alternative AI futures. Apropos of nothing, I have a big announcement coming next week. And since I’m not all that subtle, I’ll say that it has something to do with this:
On to the show!
🔗 Some Links
In its recent report, “Gen AI: Too Much Spend, Too Little Benefit?,” Goldman Sachs skewered the economic promise of AI, writing, "Despite its expensive price tag, the technology is nowhere near where it needs to be in order to be useful for even such basic tasks." Part of the reason the technology isn’t paying off is that it hasn’t found product-market fit. Indeed, it’s not really a product at all. As technology analyst Benedict Evans wrote in his essay “The AI Summer”:
Stepping back, though, the very speed with which ChatGPT went from a science project to 100m users might have been a trap (a little as NLP was for Alexa). LLMs look like they work, and they look generalised, and they look like a product - the science of them delivers a chatbot and a chatbot looks like a product. You type something in and you get magic back! But the magic might not be useful, in that form, and it might be wrong. It looks like product, but it isn’t.
The other part? Jim Covello, the Head of Global Equity Research at Goldman, argues that the results are often “illegible and nonsensical,” while the infrastructure costs are significant. That’s a dressed-up version of my long-time take: AI chatbots are bullshitters. Either way, welcome to the resistance, Goldman!
Goldman Sachs and Benedict Evans clearly haven’t listened to journalist Ratliff’s great new podcast. Otherwise, they’d already know the killer generative AI application: responding to scammers and spammers on your behalf. I kid, but Ratliff creates a voice clone of himself, connects it to an AI agent, and in episode two, the agent fields calls from scammers and spammers. My favorite moment in the episode is when a scammer asks Ratliff’s agent for its zip code and it responds ‘90210.’ It’s a smart and funny episode that elucidates some of the philosophical questions these tools pose and offers a glimpse into how they work (and mostly don’t!) in the wild.

Another application? Stealing the words of journalists to create automated websites on the cheap! In “I Paid $365.63 to Replace 404 Media With AI,” Emanuel Maiberg used ChatGPT and a constellation of technologies and services (e.g., WordPress, Fiverr, Google) to create an autonomous website that could infinitely replicate passable news articles. The tools stole stories from across the internet and remixed them ever so slightly before publishing them. While Maiberg was struck by how cheap and easy the process was, he also came away optimistic about the future of journalism, writing, “I also learned that while doing this is profitable to some, the practice relies on a fundamental misunderstanding of what journalism is, what makes it good, and therefore gives me more confidence than ever that a fully automated blog will never be able to replace 404 Media, or other investigative news outlets.” These tools can replicate and remix the words of others, but they can’t replace reporting — it’s up to us to tell the difference and value it.
🏆 Favorite Paper of the Week
High-quality data is running out, so companies like OpenAI are turning to synthetic data (i.e., data generated by their models) to train future models. I wrote about this in a 2023 essay, “The Doom Loop of Synthetic Data,” wherein I hypothesized that training AI models on synthetic data would lead to model collapse. Well, in their recent paper, “Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias,” Sierra Wyllie, Ilia Shumailov, and Nicolas Papernot show that it’s worse than that: the feedback loop doesn’t just degrade the next model, it pollutes the broader data ecosystem, producing a “distribution shift” that “encodes its mistakes, biases, and unfairnesses into the ground truth.” The paper also piloted an ‘algorithmic reparations’ approach to address some of these harms, and it made a tangible difference.
If you want to read more about algorithmic reparations, check out my essay “Beyond minimizing harms.”
🔮Alternative AI futures
I concluded “If the medium is the message, what is the message of AI?” this way:
“By focusing on whether we’re replaceable, we’re actually giving up what makes us irreplaceable: the ability to imagine, decide upon, and build futures that depart from the past.”