Hi, it’s Charley, and this is Untangled, a weekly-ish newsletter on technology, people, and power. 2023 is just about over, which means three things:
🤠 First, I’m spending a few days in Joshua Tree to reflect on the last year and set a few intentions for the year ahead.
🥂 Second, it’s almost 2024! Start the new year off with an investment in your understanding of the complex tech and society issues of the day.
💥 Third and final thing: it’s time to recap the top five posts of the last year.
5. ✋ Bigger isn’t better; it’s harmful
In March, I wrote about the problems of ‘scale thinking’: how it has shaped engineering and software development, and a few key concepts that might bring us back from the brink, like subsidiarity and transformative justice. I ended it this way: “It’s in identifying and letting go of these ways of thinking that we can start to imagine alternative systems. For instance, ones where we stop thinking bigger, and start thinking smaller.” I’ll spend part of 2024 imagining alternative systems. I hope you’ll join me.
4. 💫 Technically Social
In May, I published a special issue that asked the uncomfortable question: uh, what even is ‘technology’? Then I spent 3,000 words trying to answer it. It’s not comprehensive, but it’s a good start for anyone who wants to understand what technology is and the mechanisms by which it is entangled in social systems. I dug into five of those mechanisms:
Technology evolves in combination with other technologies.
Technology shapes societal values. But technology itself is shaped by the values and beliefs of its designers.
Technology shapes and constrains human agency.
Technology is a tool or a system.
Technology is ecological.
Enjoy!
3. 👀 The artificial gaze
Imagine if society — for all its messy, entangled complexity — were one big algorithmic system: what would it be optimized for? What are the implicit objectives that — consciously or not — shape organizations, societal dynamics, and ourselves? I’ve previously singled out efficiency and the pursuit of scale as two such objectives. In September, I wrote about another element of our modern-day optimization function — unrealistic beauty standards — and how artificial intelligence might alter it. Toward the end, I drew on Elise Hu’s great new book, Flawless: Lessons in Looks and Culture from the K-Beauty Capital, which offered three ideas to turn the corner on our relationship to unrealistic beauty standards and appearance labor: embodiment, mutuality, and worthiness.
2. 🧠 The problem with Elon Musk’s first principles thinking
In November, I wrote about the problem with Elon Musk’s ‘first principles thinking’ and the presumption of value-neutral knowledge. I argued that we need to shift our thinking away from the idea that knowledge is objective and stable, to thinking of it as something that’s situated, partial, and socially constructed. To do that, we need to relax our assumptions about the following:
Expertise and ‘the hard sciences’: Sure, let’s save a seat at the table for engineers and physicists, but if we don’t make room for sociologists, anthropologists, political economists, and especially those affected by the sociotechnical systems, then we’re missing an important part of the table!
The stability of facts: in actuality, they are contingent. Sure, some facts endure unperturbed, but others evolve or change depending on the context.
What is knowable: we need to embed more uncertainty and humility into our decision-making and policymaking. Historically, policymaking has entailed people making decisions based on research and evidence, only to find that the problem persists or gets worse. That track record calls for more humility, but also for more flexible frameworks.
Relaxing our assumptions about what’s stable and what’s knowable can be scary. It requires letting go of control and embracing uncertainty. But this mindset shift can also be empowering. As anthropologist David Graeber put it, “The ultimate, hidden truth of the world is that it is something that we make, and could just as easily make differently.” It’s in relaxing those assumptions that we confront our agency: our ability to change existing systems and take the brave steps to build new ones.
1. 🤖 AI isn’t hallucinating. We are.
ChatGPT doesn’t have a mind, so to say it ‘hallucinates’ anthropomorphizes the technology, which is a big problem. But in June, I argued that ‘hallucinate’ is the wrong word for another important reason: it implies an aberration, a mistake of some kind, as if the model isn’t supposed to make things up. But that’s exactly what generative models do — given a bunch of words, the model probabilistically makes up the next word in that sequence. Presuming that AI models are making a mistake when they’re doing exactly what they’re designed to do has profound implications for how we think about accountability for harm in this context. Want to find out what they are? Give the most-read issue of Untangled a read.
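If it helps to make that concrete, here’s a deliberately toy sketch of what ‘probabilistically making up the next word’ looks like. The words and numbers are invented for illustration; a real model learns a distribution over tens of thousands of tokens, but the basic move is the same: sample the next word from a probability distribution, whether or not the result happens to be true.

```python
import random

# Toy next-word distribution for the prompt "The sky is"
# (numbers invented purely for illustration; real models learn these from data).
next_word_probs = {
    "blue": 0.6,      # plausible and true
    "falling": 0.3,   # plausible and false
    "purple": 0.1,    # less likely, still possible
}

# Sample one next word in proportion to its probability.
words = list(next_word_probs)
weights = list(next_word_probs.values())
print("The sky is", random.choices(words, weights=weights, k=1)[0])
```

Nothing in that sketch checks whether the output is accurate; it just picks a likely continuation. That’s the sense in which ‘making things up’ is the feature, not the bug.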
Okay, that’s a wrap on 2023. See you in 2024!
Charley