The Copycat Economy
Why Generative AI’s ROI Is Stuck at Zero
📖 Favorite Finds
“AI for social good” is oxymoronic. In a great essay, scholar Abeba Birhane shows how AI exacerbates the complex social problems it promises to address. Birhane explains how AI tools are unreliable and backward-looking, how they exacerbate bias, push Eurocentric views, and encode injustices in their models. And yet, companies position them as key solutions to complex social and political problems. As I wrote in Let’s get rid of ‘AI for good’, “Absent specificity and choices that confront real tradeoffs, ‘for good’ just becomes a self-justifying refrain for those in power to maintain it.”
Sycophantic Companions: Seventy-two percent of teenagers have talked to an AI companion, and the FTC wants to know how Big Tech is shaping those conversations. The FTC is seeking information about how tech companies develop AI companions — how they monetize engagement, how they develop and approve characters, and how they measure, test, and monitor for negative impacts before and after deployment. Sure, it’s only an inquiry, and what the FTC does with that information is TBD. But it’s an important step in determining how these chatbots function, and whether, as I’ve argued in the past, sycophancy isn’t a quirky personality trait but the product of a chatbot designed and optimized for engagement.
AI <> misogyny: AI is the “new frontier in the subjugation of women,” explains Laura Bates, author of the new book The New Age of Sexism: How AI and Emerging Technologies Are Reinventing Misogyny. Right now, anyone can generate a hyperrealistic pornographic image of a woman or girl from images of fully clothed women online — and it turns out, most of these undressing apps don’t work on men’s bodies. Or take the rise of AI girlfriends, which, as Bates explains, “relies on the presentation of a hugely misogynist idea of what a relationship is and should be, what a woman is and should be.” Bates continues, “She is utterly subservient and submissive and there to flatter your ego.” Generative AI doesn’t simply reflect misogyny; it shapes and amplifies it.
The Copycat Problem
The Census Bureau tracks the adoption of generative AI at large companies, and its latest data shows the steepest decline since the bureau started collecting those numbers. A new MIT Media Lab study found that 95% of organizations are seeing zero return on their generative AI investments. And in another survey, CEO confidence in their AI strategies collapsed from 82% in 2024 to just 49% in 2025. What’s behind the decline in adoption, returns, and confidence? To answer that, let’s take a quick tour through some organizational theory.
When life feels uncertain, people look around and copy what others are doing. Organizations do the same. As DiMaggio and Powell wrote in The Iron Cage Revisited, when technologies are poorly understood or systems are uncertain, companies often model themselves on competitors. This is what they call “mimetic isomorphism.”
That’s what’s happening with generative AI. Firms didn’t jump in because ROI was proven, or because it solved a problem no other tool could. They adopted it because everyone else was adopting it — because they didn’t want to look like they were falling behind. Copycat behavior inflated the hype bubble, with every company reinforcing the narrative: “Generative AI will be transformative, and we’re already on the edge of innovation.”
Copying explains the rush in, but not the uneven returns. For that, we need structuration theory, which Wanda Orlikowski applied to technology in organizations. In The Duality of Technology, Orlikowski argued that organizations approach technology with a flawed mindset: as if tech were an object with inherent, causal impacts. In reality, technology only matters through practice — through how people actually use it, and the meaning they make from it. She called this “technology-in-practice”: a recursive process in which humans shape technology as they use it, and those enacted technologies in turn shape organizational routines and structures. Over time, these patterns become institutionalized.
That’s why ‘plug-and-play’ AI doesn’t exist. Adoption is slow, uneven, and deeply tied to context. A few examples:
A marketing team may use ChatGPT to draft copy, enacting it as a productivity booster — raising the baseline expectation of speed.
A policy team may restrict its use with long compliance playbooks, enacting it as a risk to be contained.
A research team may treat it as a thought partner, enacting it as a collaborator in ideation.
Same tool. Three very different futures, and three different meanings. As Kristina McElheran at the University of Toronto puts it: “AI isn’t plug and play. It requires systemic change, and that process introduces friction, particularly for established firms.” Changing routines, governance, and organizational culture takes time. This helps explain the disconnect between hype and results. Adoption is uneven because companies bring generative AI into existing cultural practices, routines, and power structures.
I was recently asked to participate in a longitudinal AI expert survey that aims to forecast AI’s impact on society and the economy. I don’t expect my forecasts to come close to reality. I don’t think we can predict the future of AI’s impact on complex systems — I’m mostly interested in how predictions themselves relate to power. What I do believe, though, is this: Change will play out more slowly — and more unevenly — than the hype suggests. Because it’s not about the technology itself, but how it is enacted, contested, and institutionalized in practice.
“We need to view the world as continuously producing newness, within which we create patterns of relative stability.” Bill Sharpe
Before you go: 3 ways I can help
Systems Change for Tech & Society Leaders: Everything you need to cut through the tech hype and implement strategies that catalyze true systems change.
1:1 Advising & Executive Coaching: Need help aligning technology with your vision of the future, not the other way around? Apply here.
Organizational Support: Your playbook for navigating uncertainty and making sense of AI — what’s real, what’s noise, and how it should (or shouldn’t) shape your system.
P.S. If you have a question about this post (or anything related to tech & systems change), reply to this email and let me know!


