If the medium is the message, what is the message of AI?
You’re a unique li’l meat-bag who can imagine their own future — don’t let AI take that away from you!
Hi, it’s Charley, and this is Untangled, a newsletter about technology, people, and power.
Can’t afford a subscription and value Untangled’s offerings? Let me know! You only need to reply to this email, and I’ll add you to the paid subscription list, no questions asked.
In June, I focused on AI, loss, and grieving. I wrote an essay about how AI might alter loss, grieving, and our ability to move forward; spoke with Caitlin Dewey about using AI-generated images to grieve following her third miscarriage; and interviewed scholar Tamara Kneese about what happens online when we die. This month’s theme? What humans can do that AI systems cannot! Let’s dig in.
Marshall McLuhan famously argued that the medium matters more than the content it carries: the medium is the message. Television, for example, has made all news into entertainment. Social media has turned everyone into a content creator, and everything into a collective performance of the crowd. So what’s the message of generative AI? As Ezra Klein rightly suggests, it is that humans are replaceable.
But we’re not, actually.
Tan, a professor at University College London’s School of Management, argues that we’re posing the wrong question as a society. We’re asking, ‘Can AI systems produce outputs that look like human outputs?’ when a more discriminating question would be: ‘What can humans do that AI systems cannot?’ Tan’s initial answer to this question is meaning-making! (Paid subscribers will get access to a conversation between Tan and me next Sunday.) For Tan, meaning-making is “deciding or recognizing that a thing or action or idea has (or lacks) value, that it is worth (or not worth) pursuing.” We make subjective decisions about what matters and why all the time, but Tan is concerned that we’re no longer conscious of it, that we’re moving through the world unaware of these subtle decisions and judgments. To make them explicit, Tan offers four types of meaning-making:
Type 1: Deciding that something is subjectively good or bad. You might decide that “AI is good!” or “AI is bad!” (Or you might read Untangled and decide that whether AI is good/bad misses the point altogether; that these tools are entangled in social systems, and we can’t think of them as conceptually separate phenomena.)
Type 2: Deciding that something is subjectively worth doing (or not). For example, you decided to subscribe to the paid edition of Untangled (right??) because you think it’s worth doing.
Type 3: Deciding what the subjective value-orderings and degrees of commensuration of a set of things should be. For example, I’ve lived in Los Angeles and Brooklyn, and I can tell you that LA is just better.
Type 4: Deciding to reject existing decisions about subjective quality/worth/value-ordering/commensuration. Because we’re unpredictable humans and who knows why we do anything ever *goes to therapy.*
Technologists and developers make subjective decisions when building AI models all the time: What objective should the machine try to optimize for? Subjective decision! What data should the model be trained upon? Subjective decision! How should that data be classified? Subjective decision! (Read What does it mean to ‘train AI’ anyway for a litany of subjective decisions.) In short, AI models are imbued with the subjectivity and biases of their creators — and the data produced by the subjective decisions you and I make in the world.
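To make that litany concrete, here’s a minimal, hypothetical sketch in Python. It is not anyone’s real training pipeline; every constant, threshold, and function name is an assumption invented for illustration. The point is simply that each line encodes a human value judgment before the machine ever “learns” anything.

```python
# A hypothetical training setup. Every choice below is a human,
# subjective decision, not something the machine discovered on its own.

# Subjective decision: what counts as data worth learning from?
MIN_UPVOTES = 10            # we decided popularity is a proxy for quality
EXCLUDE_BEFORE_YEAR = 2010  # we decided older text is less relevant

# Subjective decision: how should the data be classified?
LABELS = ["positive", "negative"]  # we decided nuance collapses into two bins


def keep(example: dict) -> bool:
    """Filter the training set according to our chosen proxies for quality."""
    return (
        example["upvotes"] >= MIN_UPVOTES
        and example["year"] >= EXCLUDE_BEFORE_YEAR
    )


def loss(predicted: float, actual: float) -> float:
    """Subjective decision: what should the model optimize for?

    We chose to penalize all errors equally (squared error). Weighting
    some mistakes more heavily than others would be a different value
    judgment, and would produce a different model.
    """
    return (predicted - actual) ** 2
```

Swap any one of those choices and you get a different model; the “objectivity” of the output is downstream of subjective decisions like these.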
Now, just because technologists imbue subjectivity into machines doesn’t mean those machines can therefore make meaning. Shannon Vallor, professor of Ethics of Data & Artificial Intelligence at the University of Edinburgh, has a new book out called The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. In it, she uses the metaphor of a mirror to describe AI and argues that a mirror is not a mind. She writes:
“Consider the image that appears in your bathroom mirror every morning. The body in the mirror is not a second body taking up space in the world with you. It is not a copy of your body. It is not even a pale imitation of your body. The mirror body is not a body at all. A mirror produces a reflection of your body. Reflections are not bodies. They are their own kind of thing. By the same token, today’s AI systems trained on human thought and behavior are not minds. They are their own new kind of thing — something like a mirror. They don’t produce thoughts or feelings any more than mirrors produce bodies. What they produce is a new kind of reflection.”
Right, as I wrote recently, the computational metaphor (the idea that ‘the computer is a brain’ and ‘the brain is a computer’) leads us astray. AI systems produce a reflection, which is an altogether different thing. The capital-P ‘Problem’ is that AI systems reflect the status quo. They’re trained on past data shaped by inequitable social systems and human biases; their outputs then drive new actions and decisions in the world, which further encode those historically rooted inequities. As Vallor puts it, “What AI mirrors do is to extract, amplify, and push forward the dominant powers and most frequently recorded patterns of our documented, datafied past.” We therefore never look forward and imagine what we might become; instead, the mirror shows us who we were, and the past follows us into the present.
Our uniqueness as meaning-making meat-bags creates the space between encoding the past and transcending it. According to Tan, meaning-making is what allows a human to behave in unexpected ways:
“Humans can decide that an outcome is good and worth pursuing even if the patterns of previous human activity would say otherwise. In other words, humans can intend to be unpredictable — the intentional unpredictability is the result of humans choosing to create new meaning for the outcomes they pursue and the actions they take to achieve those outcomes.”
Machines can’t decide what outcomes matter. Nor should they — that’s our job! By handing over mundane and consequential decisions to machines, we’re renouncing our own agency to imagine better futures for ourselves.
Meaning-making isn’t all we have to reclaim. Once you focus on Tan’s question, it becomes clear that there are several things humans can do that AI cannot:
Reflection & Introspection: In a recent conversation with Alison Gopnik for the Los Angeles Review of Books, AI scholar Melanie Mitchell reminds us that reflection is core to our intelligence, whereas LLMs “have no calibration for how confident they are about each statement they make other than some sense of how probable that statement is in terms of the statistics of language.” (The short sketch after this list shows what that raw probability looks like in practice.)
Mental Models & Concepts: Mitchell further explains that a concept is “a mental model of some aspect of the world that has some grounding in the truth.” It doesn’t have to be true in the physical world. Mitchell offers the example of a unicorn — it’s not a real thing, but it exists in a fictional world and we can reason about it. She goes on to say “I think these mental models of the way the world works, which involve things that cause other things, are ‘concepts.’ And this is something that I don’t think LLMs have, or maybe even can develop, on the scale that humans do.”
Moral Imagination & Practical Wisdom: AI cannot imagine things it hasn’t encountered in its training data. That’s a huge limitation, and as Vallor writes, something that humans can do quite well: “Practical wisdom is the virtue that allows for moral and political innovation. Because it links our reasoning from prior experience to other virtues like moral imagination, it allows us to solve moral problems we haven’t encountered before.”
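To ground Mitchell’s calibration point, here’s a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model; the prompt and model choice are my illustrative assumptions, not a claim about any particular production system. The probability attached to each candidate next token reflects the statistics of language in the training data, not whether the resulting statement is true — that’s the gap between statistical likelihood and calibrated confidence.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available language model (chosen for
# illustration; any causal LM shows the same behavior).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for every possible next token, at the last position.
    next_token_logits = model(**inputs).logits[0, -1]

# Softmax turns the scores into a probability distribution over the
# vocabulary. This is the only "confidence" the model has: how
# statistically likely each continuation is given its training data,
# not whether the continuation is factually correct.
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```

A high probability here means only that the continuation is a common pattern in the training text; the model has no separate mechanism for reflecting on whether it should be confident.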
What else can a human do that AI can’t? Leave your examples in the comments!
The medium of any message is powerful. As Ezra Klein and others have argued before, we can easily draw a straight line from television to everything-is-entertainment to Donald Trump. But as we consider the message of AI (which, of course, as introspective creatures, only we can do), remember another quote from McLuhan: “There is no inevitability as long as there is a willingness to contemplate what is happening.” By focusing on whether we’re replaceable, we’re giving up what makes us irreplaceable: the ability to imagine, decide upon, and build futures that depart from the past.
As always, thank you to my editor.
Reflecting on your question “if the medium is the message, what is the message of generative AI,” I think the message of generative AI is personalization. If someone is getting information from a chatbot, then depending on the specific words of their prompt (or their chat history), they will get a slightly different output than someone else. This probably results in increased fragmentation of experience and understanding. Interestingly, GenAI even enables people to personalize the medium itself: to transform any input article, diagram, etc. into a new medium that better matches their learning style (like video or audio). At least, this is theoretically possible once the technology advances further. So perhaps the influence of the “source medium” on a society’s collective perception of an event becomes less important, and “the medium is the message” as a concept becomes relevant mostly on an individual level. It could even increase society’s collective understanding, by unlocking content previously confined to a medium inaccessible to some (e.g., a visual diagram) and translating it into a medium accessible to those individuals (e.g., audio), or vice versa.
Another message of this current generation of GenAI could be efficiency over accuracy. People could end up with slightly inaccurate perceptions of events and concepts, unless future generations of the technology iron out the confabulation issue.