The Age of Myth-Making
Digital technologies have changed the way we read and write — what comes next?
In the grand sweep of history, we’ve evolved from an oral culture to a literate one. But now, aided by technological shifts, we’re reading less, we’re writing like we text, and our individual and collective attention is full. So where do we go from here? From orality to literacy to … what, exactly? Let’s dig in.
In Orality and Literacy, Walter J. Ong traces the evolution from orality to literacy and, along the way, conveys the power and import of reading and writing. If we hadn’t evolved to read and write, we wouldn’t have knowledge as we know it. As Ong writes, literacy is “absolutely necessary for the development not only of science but also of history, philosophy, explicative understanding of literature and of any art, and indeed for the explanation of language.” Don’t believe him? Try to do a li’l calculus or explain Plato’s Republic without reading or writing. As Nicholas Carr writes in The Shallows, “The written word liberated knowledge from the bounds of individual memory” and “opened to the mind broad new frontiers of thought and expression.” Our reading, writing, and critical thinking habits are continually changing in big ways. Here are just a few recent shifts:
The share of Americans who reported reading a book in the past year has fallen below half. Students at elite colleges are struggling to read a thick book front to back.
The average American’s ability to “reason and solve novel problems” peaked in the early 2010s and has been declining ever since.
The number of 18-year-olds reporting difficulty thinking, concentrating, and learning new things began rising rapidly in the mid-2010s.
How we write is shifting to a kind of “textspeak” that mirrors the frictionless ease and informality of texting and social media.
Large language models are flattening language and prompting us to talk to one another like chatbots.
A recent study found that the use of generative AI leads to less critical thinking and can “result in the deterioration of cognitive faculties that ought to be preserved.”
The introduction of new technologies that change how we share and receive information has always stoked moral panic. Hell, in the fourth century BC, Socrates and Thamus worried that the OG technologies — reading and writing! — would make us shallow, unreflective, soulless people. On writing, Thamus argued that “it will implant forgetfulness in their souls: they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.” He further thought reading would fool us into believing we are knowledgeable, arguing that those who rely on it will be “filled, not with wisdom, but with the conceit of wisdom.”
But I’m not articulating a moral panic here. Sure, I love books — and wish I didn’t find it so challenging to sustain the attention to read them these days — but people’s brains absorb information in different ways. I think something subtle is lost in the shift from reading books to watching videos or listening to podcasts, but I also know that’s snobbish and privileges a neurotypical experience. And yes, ‘textspeak’ can be shallow, but it can also be punchy and provocative. Still, it’s hard to ignore that reading, writing, and critical thinking have been changing in material ways, leading many to argue that we’re headed back to an oral culture. For example, Jasmine Sun argues in “The Post-Literate Society” that video is overtaking text on most platforms. Podcasts are on the rise! ChatGPT, by offering a summary, is removing the need to write precise, well-structured sentences. I agree with Jasmine that social media and LLMs are changing how we read and write, but I’m less convinced that we’re headed back to an oral culture. And the reason is simple: information overload defines our current culture more than any single technology does.
To better understand how we’re changing, we need to return to 1971 and Herbert Simon’s seminal work on attention economics. Simon’s key insight is that information consumes attention, so the critical question when assessing a new technology becomes “how much information will it allow to be withheld from the attention of other parts of the system?” Analyzed this way, one could argue that generative AI might actually conserve our attention — it arguably generates less information than it takes in. But it still generates a lot of information! And as Chris Hayes writes in his great new book The Sirens’ Call, “Almost any technology that’s good at screening information to preserve our attention is also going to be good at generating things that attempt to capture and exploit our attention.” In short, while generative AI might take in more information than it produces, it will undoubtedly find ways to capture, consume, and overwhelm our attention.
In an age of information overload, I think we’re likely to turn increasingly to myths. As Marshall McLuhan foresaw, “When a man is overwhelmed by information…he resorts to myth. Myth is inclusive, time saving, and fast.” Myth offers a structure and context for efficiently making sense of new information. It does so by tapping not into reason but into emotion and the subconscious mind. Take the rise of conspiracy theories. As Carr writes, their spread has “less to do with the nature of credulity than with the nature of faith.” Myths are also tailor-made for the current attentional environment. As Hayes writes, “In evolutionary terms, shocking bits of false information outcompete mundane bits of true information.” A story has to shock us to stick. As the strange and shocking become normalized, Carr explains, “a paranoid logic takes hold” where “strangeness itself becomes a criterion for truth.”
Authoritarians and generative AI both benefit in this world. Both offer confident-sounding declarations without regard for the truth. In fact, both our political environment and our technological one represent the biggest indications that we’re already in an era of myth-making. That these systems ‘reason’ is a myth. That they ‘hallucinate’ is a myth. That we can ‘align them to our values’ is a myth. That ‘AI agents have agency’ is a myth. That we’re on the cusp of AGI is a myth.
In 1956, at the beginning of ‘AI’ research, Herbert Simon and Allen Newell argued that we should call it ‘complex information processing.’ And, as Cosma Shalizi explains, the term ‘artificial intelligence’ has done a lot to cultivate a universe of myths that we still grapple with today: the word ‘intelligence’ obscures so much within these systems — and describes a bunch of other stuff that isn’t there. It opens the floodgates for anthropomorphizing and makes it easier to believe that AI can replicate human labor, emotion, and creativity. By believing these myths, we’re inhibiting our own ability to see these systems clearly, understand their societal impact, and take action. Welcome to the age of myth-making.

As always, thank you to Georgia Iacovou, my editor.