Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. Guess what? It took Untangled one year to reach 1K subscribers but only 6 additional months to go from 1K to 3K subscribers. Check out the growth on my li’l Substack chart.
I’m genuinely thrilled that you’ve joined the Untangled community. 🙏 If you find Untangled valuable, consider upgrading to the paid edition.
Okay, it’s Primer time, y’all!
On Untangled’s one-year birthday, I launched the Primer. As a reminder, the Primer is a special issue that consists of:
🖼️ The frameworks and concepts from past essays, which are organized nicely into themes.
📚 Each of these themes connects to the original Untangled posts and provides further resources if you want to dive in deeper.
✔️ With each theme, I also recommend a daily action you can try in real life, to make these ideas more practical.
When I launched it, I said I would add to the themes as I go. Well, I’m a man of my word, as the saying goes. In this update to the Primer, I’ve mapped recent issues of Untangled to the existing themes and offered two new ones. There are now 10 themes in total. Enjoy!
☝️ Theme one: technologies and social systems like gender, race, and power are entangled. I’m nothing if not on-brand!
Each issue of Untangled since November has highlighted how technologies and social systems are entangled — I’m still on brand, huzzah!
Rather than go through each, I’m going to tease an upcoming special issue of Untangled — currently titled ‘Technically Social.’ Whereas the Primer focuses on how social systems (e.g. narratives, community cultures, organizational structures and incentives within companies, systems of race and gender, etc.) shape technology, Technically Social will focus on how technology shapes social systems: the materiality of the technology, how it is designed and developed, what it affords, whether it is a standalone tool or arranged in a system, and how its effects are ecological. It is not a perfect complement to the Primer (I didn’t plan that far in advance), but that’s a useful way of distinguishing between the two.
Like the Primer, I’ll continue to add to it over time, and while everyone will be able to access it for the first few weeks, the initial version — and subsequent iterations — will only be for paid subscribers.
Okay, now back to our regularly scheduled programming.
👬 Theme two: technologies have narratives that shape or hide the societal impact of those technologies.
My writing on ChatGPT underscored theme two. In “ChatGPT is very Trumpy,” I wrote that we make a big mistake when we use words like ‘learning’ or ‘understanding’ to describe what ChatGPT is doing. It’s simply a fancy version of autocomplete, unconcerned with the truth, making probabilistic guesses about what comes next in a string of words. That’s why I called it a bullshitter, and why I think engaging with it is best framed as entertainment or a form of ‘play.’
Despite my best efforts, the current frames continue to anthropomorphize ChatGPT and conjure the idea that one day — as it continues to ‘learn’ — it will become sentient. While this won’t happen — it will only ever make probabilistic guesses — these frames obscure the harmful impact of these technologies. I dug into one such harmful impact in “Will ChatGPT replace search?” where I drew on the research of scholars Safiya Noble and Emily Bender to explain how ChatGPT might actually exacerbate the ways search systems already discriminate against people of color. As I wrote then,
“With Google or Bing, the searcher sees racist or sexist results next to others; so they’re nudged to ask, ‘um, where do these come from?’ Now imagine the racist result coming from an authoritative-seeming voice that we anthropomorphize as ‘intelligent.’”
In other words, the direct answer offered by ChatGPT comes with a direct cost.
🍨 Theme three: the legitimization of narrators who are experts in technology, and not the social systems that the technology is entangled with — and how this reaffirms or exacerbates the status quo.
Theme three showed up most prominently in my essay, “Can you predict the future?” Here I described those offering tech predictions as selling “uncertainty and change so that they become an authoritative voice in the face of transformation. In short, they create the conditions for their own relevance.” So we legitimize technologists and then they — despite getting predictions wrong a lot of the time — create the conditions for their continued legitimacy. This cycle continues unabated, and since those offering tech predictions tend to be powerful, cis-male people, the imagined futures that capture our collective attention are actually quite conservative in nature. More often than not, they reaffirm the status quo.
This theme also played out in “The Illusion of ChatGPT,” where I observed that most of the journalists and pundits surprised by ChatGPT’s weird and harmful tendencies are, drum roll please, white and cis-male. Because their experience with ChatGPT is unexpected (to them!), they in turn ask the broader public to see these examples as aberrations; as examples of the technology making mystifying mistakes. But as I wrote then, if they or we had been listening to female scholars, and in particular female scholars of color, the mistakes and harms we are witnessing now wouldn’t be so surprising.