Last week, I shared my conversation with Karen Hao about how to reclaim our agency from the monolithic and misguided scale-at-all-costs approach to generative AI. Big Tech wants you to believe that there is only one way to ‘AI’ — but there are many, many alternative ways we could conceive, build, deploy, and govern ‘AI.’
We could center consent and allow participants to determine how and in what contexts their data are used.
We could ASK whether communities want the tool in the first place.
We could prioritize participatory design and community ownership.
We could double down on building for specific local contexts.
We could do this all in a way that minimizes the costs to people’s livelihoods, public health, and the environment.
After my conversation with Karen, I read a new research report from Helen and Dave Edwards, co-founders of the Artificiality Institute, which offers another way to reclaim our agency: create new meaning frameworks! The report, which stems from workshop observations of 1,000 people using generative AI and follow-up interviews, finds that “symbolic plasticity,” or “the ability to create new meaning frameworks,” is the key difference between those who “reframe their understanding of thinking, creativity, and identity” and thus “adapt more consciously,” and those who “drift into dependency or crisis.” ‘Meaning frameworks’ are simply the frames, metaphors, narratives, and categories we use to make sense of the world. (If you want to dive deeper into this topic, I wrote a Guide not long ago about how to analyze the frames and metaphors that hide power in AI.) And the report finds that if you can nimbly reframe and adapt your orientation to generative AI (e.g. seeing AI as a tool rather than adopting a human vs. machine mindset), you can “maintain clearer boundaries” and “show greater resilience.”
Why is this so hard? Creating new meaning frameworks or shifting your mindset hinges on, among other things, conscious awareness. As I wrote previously, “Our uniqueness as meaning-making meat-bags creates the space between encoding the past and transcending it.” It’s this sliver of conscious awareness that enables you to make a small but atypical choice, intentionally adopt a new way of seeing, or take an unpredictable action. I’ve had a meditation practice long enough to know that it is just a sliver; that our thoughts emerge, as if out of nowhere, and our only choice is whether to follow them or let them pass.
Big Tech companies want you to automatically follow the next thought, so to speak. Generative AI could be designed in ways that instantiate clearer boundaries, nurture reflection, and separate — across the AI lifecycle — what humans do best (i.e. make meaning!) from machine processing. But Big Tech has taken a very different tack — one that is frictionless and fluid — because they want you to become so engaged that the conceptual boundary between self and technology starts to blur. They want the technology to become so integrated into your lives that it’s operating in the background, ambient, such that it slips beneath your conscious awareness.
This is your last chance to sign up for Cohort #3 of my course — it kicks off next weekend.
Don’t let tech hype hide what really needs to change: cut through it.
Don’t let technological change happen to you: ensure that it works for you.
Don’t let power imbalances hold you or your system back: learn how to change them.
As the report from the Artificiality Institute explains, this blurring is already occurring. For example, they observed “cognitive blending developing through repeated collaboration” between people and chatbots such that over time, “authorship becomes genuinely ambiguous” for some. The boundary between who or what is responsible for the thought or idea becomes unclear and permeable. Similarly, some participants experienced a kind of “identity coupling” whereby they “assign intention, develop habits of trust, or feel mirrored in ways that reshape how they understand themselves.” As the authors point out, and as we’re finding again and again, these tools don’t have to be sentient for people to form deep and often disturbing relationships with them.
But these experiences weren’t true across the board. There were always participants who formed healthy, adaptable relationships, and the difference hinged on their ability to create new meaning frameworks. Those who applied old, rigid frameworks (e.g. the teacher who only saw generative AI as a tool for students to cheat with) felt stuck, but those who could create new ways of seeing the technology amidst these identity shifts could “maintain agency through conscious reframing.” It was this conscious reframing that set them apart, protecting them from losing their identity and their agency.
I’m not against generative AI, per se. I’m very much against the inherently anti-democratic, scale-at-all-costs approach that Karen Hao documents, the anthropomorphizing of fancy math, and the frictionless approach that seeks to strip us of the very thing that makes us human: our conscious awareness and ability to make meaning. So how might we center the cultivation of conscious awareness across the AI lifecycle and offer alternative frameworks for understanding what AI is, what it isn’t, and how to adapt with it in a healthy manner? I don’t presume to know the answer, but I have ideas about where we can all start:
We can chart new research agendas that explore these themes: check out the questions the Artificiality Institute proposes at the end of their report.
We can poke holes in the metaphors, frames, and narratives that tech companies use to obfuscate power and make a single approach to AI seem inevitable. Once you start poking holes, you’ll find that there is a system that you can change — and I can teach you how to change it! (Yes, this entire post doubles as a plug for my course, Systems Change for Tech & Society Leaders, hah.)
We can make visible all the hidden inputs — the stolen data, the environmental costs of data centers, the psycho-emotional costs to data labelers — that make these technologies seem like magic.
We can center meaning making in the development of AI products. Vaughn Tan and I had a great conversation about what that might look like — check it out.
The path forward isn’t easy or clear, but it most certainly involves new meaning frameworks and a plastic mindset — and probably some meditation, too.