The illusion of ChatGPT
PLUS: My favorite articles on AI discrimination and a new paper on AI governance.
Whenever a little kid watches a magic trick, they become entranced. But even once they realize it’s a trick — and not actually magic — a weird thing sometimes happens: they remain entranced. A similar thing often happens with new technologies: we buy into a fantasy about the technology, but even when it is revealed to be an illusion, we still cling to it. Let’s apply this idea to ChatGPT, shall we?
In the 1960s, MIT computer scientist Joseph Weizenbaum created a chatbot called ‘ELIZA.’ While Weizenbaum was clear that ELIZA couldn’t actually understand what people said to it, that didn’t stop people from projecting understanding onto ELIZA. Weizenbaum wrote that he was “startled to see how quickly and how very deeply people conversing with ELIZA became emotionally involved with the computer and how unequivocally they anthropomorphized it.” This became known as the “ELIZA effect.” But as LibrarianShipwreck recounts, Weizenbaum became a much more vocal critic of AI upon realizing “that even once the processes were explained many people still bought into the ’illusion.’” Weizenbaum was prescient in noting that “A certain danger lurks here.”
There are at least a few components that contribute to this ‘illusion.’ According to LibrarianShipwreck, one is that everything beneath the hood of the technology remains hidden. Recall that we don’t know the corpus of data OpenAI uses to train ChatGPT. Nor did we know that OpenAI outsourced the job of labeling violent, sexist, and racist text to Kenyan laborers. All of this is rendered invisible, replaced by a clean product interface.
👾 If you want to dive into the often hidden labor behind AI, check out this great piece, and then sign up to the author’s newsletter.
Another element of the illusion is the idea of “enchanted determinism” theorized by scholars Alexander Campolo and Kate Crawford, which they define this way:
“A discourse that presents deep learning techniques as magical, outside the scope of present scientific knowledge, yet also deterministic, in that deep learning systems can nonetheless detect patterns that give unprecedented access to people’s identities, emotions, and social character.”
Already, the discourse surrounding ChatGPT is mystical. For example, at the World Economic Forum, Coursera CEO Jeff Maggioncalda said “It looked like magic,” adding that it is a “game changer” that is “blowing my mind.” I’m not picking on Maggioncalda - his views aren’t all that different from popular press accounts, which continue to highlight how ChatGPT “learns,” “thinks,” or “feels,” rather than describing it as a fancy version of autocomplete that only ever offers approximations. Here are a number of headlines collected over at another great newsletter - you should sign up!
So what exactly is the ChatGPT illusion?