Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. A number of you said you liked the post where I used the primer to contextualize a big news story, shared highlights from the Untangled community, and offered a sneak peek into what I’m reading and writing. So I’m giving that another try - let me know what you think.
🕹 Read to the end for a fun game!
🗞️ Let’s Untangle the news, shall we?
Is everyone in your corner of the internet going gaga over ChatGPT3? Ditto. In fact, over one million people signed up to use it in just five days. Here’s one user asking ChatGPT3 to “write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR.”
This might seem fun and harmless, but we’re making a few big mistakes in how we talk about what ChatGPT3 is and what it can do. Let’s dig in.
ChatGPT3 is a chatbot created by OpenAI (which is led by Sam Altman, the same person behind Worldcoin) that is built on a large language model (LLM), which is technologist-speak for a statistical model based on gobs of text. We don’t actually know what specific text, because OpenAI hasn’t been transparent about the training data it used. Anyway, ChatGPT3 makes probabilistic guesses about the next word that follows in a sequence of text to generate a plausible response. You can think of it like fancy autocomplete.
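If you like seeing things in code, here’s a toy sketch of what “fancy autocomplete” means. This is not OpenAI’s actual code, and the words and probabilities are made up; a real LLM learns these probabilities from gobs of text and conditions on long stretches of context, not just the previous word.

```python
import random

# A made-up "language model": for each word, the probability of the word that follows.
next_word_probs = {
    "peanut": {"butter": 0.9, "allergy": 0.1},
    "butter": {"sandwich": 0.7, "knife": 0.3},
    "sandwich": {"from": 0.6, "please": 0.4},
    "from": {"a": 0.8, "the": 0.2},
    "a": {"VCR": 0.5, "toaster": 0.5},
}

def generate(prompt_word: str, max_words: int = 5) -> str:
    """Repeatedly sample a plausible next word: fancy autocomplete."""
    words = [prompt_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no statistics for this word, stop generating
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("peanut"))  # e.g. "peanut butter sandwich from a VCR"
```

Notice that nothing in that loop checks whether the output is true; it only checks what tends to come next.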
🤖 Want more details on how the model works? OpenAI first trained an initial model on a massive corpus of text from the internet. Then it put human trainers in conversation with the chatbot, having them play both sides of the exchange to show the model what good responses look like. Another set of people, called ‘labelers,’ ranked several of the model’s outputs from best to worst, and those rankings were used to train a reward model that scores outputs. The original model was then further tuned and updated against that reward model with feedback from subsequent examples.
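Here’s a toy sketch of just the ranking step, with a made-up prompt and made-up outputs; this is an illustration of the general idea, not OpenAI’s actual pipeline or data.

```python
from itertools import combinations

# Labelers see several model outputs for the same prompt and rank them, best first.
ranked_outputs = {
    "How do I remove a sandwich from a VCR?": [
        "Unplug the VCR and gently pull the sandwich out.",  # ranked best
        "Verily, reach thine hand into the machine.",
        "Buy a new VCR.",                                     # ranked worst
    ],
}

# Each ranking becomes pairwise comparisons ("this output beats that one"),
# which can then be used to train a reward model that scores future outputs.
preference_pairs = []
for prompt, outputs in ranked_outputs.items():
    for better, worse in combinations(outputs, 2):
        preference_pairs.append({"prompt": prompt, "better": better, "worse": worse})

for pair in preference_pairs:
    print(pair["better"], ">", pair["worse"])
```

The key point: the feedback rewards outputs that people *prefer*, which is not the same thing as outputs that are true.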
The first thing to note is that while ChatGPT3’s outputs can be entertaining, there’s nothing inherently true about them. The model is trained on text from the internet and the internet isn’t exactly a beacon of truth. The internet is us! Come to think of it, I’d personally like a cut of all future revenue from ChatGPT3, what about you? The point is, if you see ‘truth’ used to describe ChatGPT3 outputs, replace it with ‘plausible response.’
The second mistake is using words like ‘learning’ or ‘understanding’ to describe what ChatGPT3 is doing. LLMs aren’t actually ‘understanding’ anything, they’re simply synthesizing strings of text. If you want to dig into these ideas, check out this paper from Emily Bender and Alexander Koller, linguistics professors at the University of Washington and Saarland University, respectively. Oh, and if you hear anyone refer to what ChatGPT3 is doing as ‘learning,’ replace it with ‘making probabilistic guesses.’
So if the outputs aren’t necessarily true and the methodology is like a fancy game of guess-and-check, what exactly is ChatGPT3? Well, as a number of computer scientists and astute writers have suggested, ChatGPT3 is a bullshitter. In “On Bullshit,” philosopher Harry Frankfurt defines bullshit as speech that is ”intended to persuade without regard for the truth.” What’s striking to me about all of the ChatGPT3 examples going around the internet isn’t that many are wrong, but how confidently wrong they are. The answers sound authoritative, even if the output is only ever probabilistic. In short, with ChatGPT3, entertainment and confidence substitute for truth and meaning. If that’s not Trumpy, I don’t know what is.
How will AI chatbots like ChatGPT3 become entangled in society? Deborah Raji and Abeba Birhane detail the myriad ways in which these tools have harmed, and will continue to harm, those with marginalized identities. On the need for accountability, they write, “People get hurt from the very practical ways such models fall short in deployment, and these failures are the result of their builders’ choices—decisions we must hold them accountable for.” Ben Thompson of Stratechery thinks it will change us from interrogators (think of what you do when searching Google) to editors: it’ll become our job to edit confident but often wrong text for accuracy. It’s hard to know for certain, but both outcomes sound plausible enough 😉
✋ Stop, Primer time! (Obviously read to the tune of ‘Stop, Hammer time!’)
Why all the focus on specific words? As close readers of primer theme number two will remember, a lot is at stake in the narrative frame we use to understand emerging technology. I wrote, “In tacitly adopting a frame, we’re aligning ourselves with a set of interests, values, and politics, often without knowing it.” See, by referring to ChatGPT3 as an ‘AI chatbot’ we’ve already ceded ground on what constitutes ‘intelligence,’ and we’re at risk of ceding even more. Think of how different the public conversation around ChatGPT3 would be if we all understood it to be a bullshitter, or principally a form of entertainment and play, as Ian Bogost suggests.
The frames inform who we anoint as narrators, and, as I write in primer theme number three, we have a history of legitimizing narrators who are experts in technology and not the social systems that the technology is entangled with. This is a big problem because the questions that will shape how ChatGPT3 impacts society are social in nature. Which use cases will be deemed appropriate? How will we understand and use (or not!) the outputs? How should these systems be governed, and their creators held accountable?
👾 So let’s play a fun, Untangled-y game: every time you read an article about ChatGPT3, I want you to do two things:
Identify the narrative frame used to characterize ChatGPT3 and see how it squares with the points above.
Identify the narrators being legitimized. Ask whose perspective and expertise are being privileged, and whose are being sidelined.
If a number of you send me what you discover, I’ll synthesize it and share it back with the entire community. Deal?
👬 Speaking of the Untangled community, you’ve got a lot going on!
- Max Read, writer and former editor-in-chief at Gawker, is collecting informative and funny pieces on Twitter and FTX. Max updates them regularly, so stop by to get your daily fix, and then subscribe to his excellent newsletter, Read Max. It has been an Untangled recommendation for a while now!
- Want a deeper dive into Twitter and how platforms fail? Sociotechnical researcher danah boyd has a great new essay, “What if failure is the plan?” boyd has been obsessing over the concept of failure and how it is entangled with perception, and she makes a compelling case that failure is in Twitter’s future.
- In their excellent newsletter, AI Snake Oil, Princeton computer scientists Arvind Narayanan and Sayash Kapoor outline the tasks and applications for which ChatGPT3 might be quite useful. One of them is entertainment!
📚 Reading at scale
For an upcoming essay on the concept of scale, I’ve been reading a few great papers, including:
“Against Scale: Provocations and Resistances to Scale Thinking,” by Alex Hanna and Tina M. Park. Scale isn’t just the number of people using a platform, it’s a way of thinking. As the authors write, scale thinking “frames how one thinks about the world (what constitutes it and how it can be observed and measured), its problems (what is a problem worth solving versus not), and the possible technological fixes for those problems.”
“A World of Standards but not a Standard World: Toward a Sociology of Standards and Standardization” by Stefan Timmermans and Steven Epstein. Standards are everywhere, though, like classification systems, they are sometimes hard to see. Nevertheless, the authors argue that we need to understand the process of standardization as a social one. If we don’t, bad things happen.
If you want to read something a lil’ less wonky, try Adrienne LaFrance’s essay on the concept of “megascale.” It’s a delight, and I return to it regularly.
Okay, that’s it for now.
Charley