Hi there, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. Last week I brought you behind the scenes at Untangled HQ. Thank you to those who hit reply and introduced themselves. An extra special thank you to the reader who reached out and complimented my shoes! 👟
My original offer still stands — pop into my inbox with a bit about who you are, a question you’re puzzling through, or a reflection on Untangled, and I promise to respond.
Now, on to the show!
In the latest special issue, “Technically Social,” I posed the brave question, “uh, what even is technology?” I’d like to follow that doozy up with another: what is ‘information’? If you stop to think about that question for more than a moment, you might break your brain. Turns out, it’s actually quite hard to define. Yet, how we understand what information is and, importantly, what it’s not, is at the heart of most disagreements over AI. Let’s dig in.
Meghan O’Gieblyn, author of the fascinating book, God, Human, Animal, Machine, writes that “we tend to think of information as something that intrinsically contains meaningful content that must be interpreted by a conscious subject.” Right, information is relational — information is not contained in syntax or a symbol alone. The information in this newsletter is information because it means something to you, the reader.
But this everyday understanding of ‘information’ is quite different from how we treat ‘data’ or ‘information’ in AI systems. See, Claude Shannon, the “father of information theory,” excluded the conscious subject from his definition altogether. He simply lopped off semantic meaning and said all languages could be understood by their syntax or form. Why? Because that allowed Shannon to claim that language could be manipulated in mathematical terms. Just patterns and probabilities, no semantic meaning to see here! As O’Gieblyn recounts, Shannon knew that “messages have meaning,” but he considered this “irrelevant to the engineering problem.”
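To make Shannon’s move concrete, here’s a rough sketch in Python (my own toy example, not anything from Shannon or O’Gieblyn): it measures the “information” in a string purely from how often each character shows up. Scramble the text into gibberish with the same letter counts and the number doesn’t budge, because meaning was never part of the calculation.

```python
from collections import Counter
from math import log2

def shannon_entropy(text: str) -> float:
    """Average information per character, in bits, computed purely from
    symbol frequencies. Meaning never enters into it."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# Same letter counts, same "information" -- even though only one of these
# strings means anything to a reader.
print(shannon_entropy("the cat sat on the mat"))   # ~3.0 bits per character
print(shannon_entropy("tac eht tam eht no tas"))   # identical result
```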
While Shannon’s definition turned language into math, around the same time the cyberneticist Warren McCulloch declared that the mind itself was little more than an information-processing machine. He argued that brains “compute thought the way electronic computers calculate numbers.” Together, the two moves set quite the stage, as O’Gieblyn explains:
This resulted in a model of mind in which thought could be accounted for in purely abstract, mathematical terms, and opened up the possibility that computers could execute mental functions. If thinking was just information processing, computers could be said to “learn,” “reason,” and “understand.”
Now, McCulloch was also aware of the limitations of his own metaphor. He acknowledged that the mind-as-information-processing-machine was an “idealization of the mind,” as O’Gieblyn writes, not reality itself. The problem with metaphors, though, is that over time, they become internalized as reality, as if we can remove the quotation marks around “learn” or “reason” and nothing is lost. But that’s not true — we lose quite a lot.
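If you want to see how literal that metaphor can get, here’s a tiny sketch (my illustration, not O’Gieblyn’s) of the kind of logical “neuron” McCulloch worked out with Walter Pitts: a unit that adds up its inputs and fires if the sum clears a threshold.

```python
def mcp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: output 1 if the weighted sum of
    binary inputs reaches the threshold, otherwise output 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# With the right weights and threshold, the same arithmetic "computes" logic.
def AND(a, b): return mcp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mcp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} = {AND(a, b)}   {a} OR {b} = {OR(a, b)}")
```

The unit “decides” and “computes logic” only in the sense that it does arithmetic and a comparison. Whether you also want to say it “reasons” is exactly the question those quotation marks are holding open.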