The Responsible AI Cage
On isomorphism, resilience, and the field's capacity to learn
Hi there,
Welcome back to Untangled. It’s written by me, Charley Johnson, and valued by members like you. Want to help me make it better and get a free email course?
Today, I’m writing about the responsible AI field’s legitimacy problem, and last week’s debate between Jennifer Pahlka and Erie Meyer. As always, please send me feedback on today’s post by replying to this email. I read and respond to every note.
🏡 Untangled HQ
Coming Up
Stewarding Complexity: What do you do when the system around you runs on command and control — and you know that isn’t working? On Tuesday, Aarn Wennekers and I will get into how to exercise agency in impossible systems.
Cohort 7 of Systems Change for Tech & Society Leaders kicks off in two weeks, and it closes at 30 participants. Don’t be 31!
Stewarding AI: Were you just tasked with leading your organization’s ‘AI strategy’ or ‘AI task force’ and are now staring at a blank sheet of paper? Stewarding AI: How to Build Responsible Principles, Workflows, and Practices is open for enrollment.
Facilitating groups is a craft anybody can learn – my colleague Kate and I can help you master it. We have two cohorts open and one kicks off in two weeks.
🧶 Deep Dive

The Responsible AI Cage
The responsible AI field has a legitimacy problem. Not a deficit of legitimacy — an excess of it. Organizations are publishing ethics frameworks, convening advisory boards, issuing principles documents, hiring responsible AI leads. Much of this signals membership in the field of organizations that take AI seriously. Anecdotally, whether any of it changes how decisions are made, how workflows are redesigned, or how accountability is structured seems to be a secondary question.
This is exactly what two organizational sociologists predicted in 1983. Max Weber had argued that organizations become bureaucratic because bureaucracy is efficient — rational administration wins out because it works better. The rationalized order that results, he warned, becomes an “iron cage.”
Paul DiMaggio and Walter Powell looked at organizational fields across the twentieth century and noticed something that didn’t fit: organizations within the same field were becoming similar to each other, but the similarity had nothing to do with efficiency. Hospitals looked more and more alike. Universities looked more and more alike. Government agencies looked more and more alike. And the convergence wasn’t obviously producing better hospitals, universities, or agencies.
Their explanation was isomorphism — three kinds, driven by three different mechanisms. Coercive isomorphism comes from shared rules and requirements: government mandates, funder conditions, accreditation standards that force convergence regardless of whether the converging practice works. Normative isomorphism comes from professionalization: as people in a field share training, credentials, and professional networks, they import similar problem definitions and solution repertoires into their organizations. And mimetic isomorphism comes from uncertainty. When organizations don’t know what the right path is, they copy whoever looks like they know. Not because they’ve verified the copied approach works, but because following the apparent leaders feels safer than standing still or trying something genuinely different.
Isomorphism produces fields organized around legitimacy rather than effectiveness. Organizations start to resemble each other not because the shared practices work, but because the shared practices signal membership. The question shifts from “does this make us better?” to “does this make us look like the kind of organization that takes this seriously?”
The question the field won’t ask
Principles documents are isomorphic outputs. They make organizations look like responsible AI organizations without necessarily making them behave like responsible AI organizations. And the field has organized its expectations accordingly — which means the social pressure runs toward producing the document, not toward asking whether the document is changing anything.
The deepest version of this problem is what the cage actually encloses. The field has organized itself around the question of how to adopt AI responsibly, which presupposes that adoption is the right answer. Organizations that ask “should we adopt AI at all — here, in this context, for these people?” look naive, irresponsible, or behind. Not because the question is wrong. Because the field has developed expectations that make non-adoption look like a failure of sophistication. The question has been foreclosed — not through deliberation, but through the accumulated weight of grant conditions, professional consensus, and the social cost of appearing to not understand the moment.
That’s the cage. Weber, who described the original version, would recognize it immediately: a structure that perpetuates itself not because it serves human ends, but because deviation from it has become costly.
What a field loses when everyone agrees
A field organized around legitimacy rather than effectiveness is also a field that has quietly lost its capacity to adapt.
Dave Snowden identifies premature convergence as one of the most dangerous failure modes a complex adaptive system can experience. Not because it stops the system from functioning — it doesn’t — but because it stops the system from learning. A system that has converged around a single dominant approach loses the variety it needs to sense when that approach has stopped fitting the problem. It can optimize. It cannot adapt. It produces outputs that look increasingly coherent and may be increasingly disconnected from the conditions they’re supposed to address.
This is the resilience problem that isomorphism creates. A responsible AI field in which every organization produces similar principles documents, similar ethics frameworks, similar advisory board structures is a field with low variety. When the dominant approach stops working — when the principles don’t change practice, when the frameworks don’t catch the harms, when the accountability structures turn out to be decorative — the field has no contrast to learn from. The organizations that refuse, that ask harder questions, that pursue genuinely different approaches: those are the sources of the field’s adaptive capacity. And isomorphic pressure systematically marginalizes them.
Resisting the frame
I couldn’t help but reflect on this while reading the back-and-forth last weekend between Erie Meyer and Jennifer Pahlka.
In case you missed it, Erie Meyer argued in a sharp essay for FedScoop that tech billionaires with AI investments to protect are using philanthropic funding to shoehorn AI into government reform work regardless of whether it fits the problem. The requirement to use AI starts the conversation at adoption — it starts with “how,” not “if” or “under what circumstances.” And there are plenty of reasons to ‘just say no’ to the current version of scale-at-all-costs AI. That’s coercive isomorphism operating in real time: the grant conditions don’t just fund activity, they foreclose alternatives. They start with the tool, not the problem or a reimagining of government’s role.
Pahlka — a genuine reformer, and someone whose career I respect — responded by arguing “Yes, philanthropy should fund AI in government.” Her essay gets a lot right: vendor lock-in is a structural problem, and agentic coding tools could, in theory, shift power back toward government. But it was also a case study in how normative isomorphism works alongside coercive isomorphism — one of the field’s most credible reformers arriving at a position that, however thoughtful, deepens the convergence.
Pahlka’s central move — the one doing the work of the whole essay — is to invoke what she calls “raised expectations.” People are already using AI to navigate medical bills, parse lease agreements, file taxes. Their expectations have shifted, therefore government should meet them where they are.
Those expectations deserve a closer look — not because Pahlka is wrong to take them seriously, but because of where they came from.
Democratic responsiveness means responding to preferences people formed through genuine deliberation about their own interests. It doesn’t straightforwardly extend to preferences produced in part by an industry with a direct financial stake in the response. The expectation of AI-mediated services has been largely manufactured — by the same companies that profit from government adoption, moving at a speed and scale that no democratic process could match.
The public’s own view complicates this further: a recent Pew Research poll finds that AI experts are far more enthusiastic about AI than the general public (56% versus 17%), that over half of Americans say they are more concerned than excited, and that they’d like more control over how it’s used.
There’s a second problem the isomorphism literature would predict. Reformers who accept the “how not if” frame in order to shape the implementation make the field’s convergence more durable. Their presence legitimizes the project. The critique that the project was wrongly framed in the first place becomes structurally harder to make once credible actors have staked their reputations on making the project succeed. Wiebe Bijker calls this interpretive closure — the moment when a technology’s meaning stops being contested and starts being settled, not through deliberation but through the accumulation of credible actors who bring their legitimacy to a particular interpretation.
The point isn’t that Pahlka is wrong to engage. It’s that her engagement, however thoughtful, is also convergence pressure. From inside the field, resisting the adoption frame or reimagining alternatives looks naive. It looks like you don’t understand the moment. It looks like you’re getting in the way of people doing good work under real constraints. That’s precisely how isomorphic pressure forecloses the questions that would allow the field to learn and become more resilient.
And yet.
Meyer’s essay is doing that work. And the fact that Pahlka responded — seriously, at length, with genuine intellectual engagement — is worth something. These are two people who care about the same things, reasoning from different vantage points, in public. That’s not fragmentation. That’s a field thinking.
What the argument itself reveals
What I find hopeful about this exchange is precisely what makes it uncomfortable. Meyer is pulling on a thread that the field’s isomorphic pressures have made costly to pull on. Pahlka is making the strongest possible case for a different view. Neither is simply repeating the consensus. And the back-and-forth — the fact that it happened, that it was read widely, that it generated this kind of friction — suggests the cage has gaps in it.
A field that makes room for refusal — that treats “should we do this at all?” as a legitimate question rather than a failure of sophistication — is a field that can actually learn. Refusal isn’t obstruction. It’s the variety the field needs to stay honest about what it’s actually producing, rather than what it intends to produce. The organizations that ask harder questions, that insist on starting with the problem rather than the tool, that resist the grant conditions when the conditions are wrong — these are the sources of the field’s adaptive capacity. Isomorphic pressure marginalizes them. A resilient field protects them.
The iron cage Weber described felt total from the inside. But cages are also structures — and structures have leverage points. The fact that this argument is happening, in public, between people who take each other seriously, is one of them. The fact that Meyer’s critique landed hard enough to require a substantive response is another. The fact that you’re reading an analysis of both, through the lens of a forty-year-old paper about institutional isomorphism, suggests the field’s imagination is wider than the cage would have it.
Isomorphism is real. But so are resisting, refusing, and reimagining. The question isn’t whether the cage exists — it does. The question is whether enough people are pulling on enough different threads, hard enough, to change the pattern.
That’s the beginning of weaving something new.
📣 Enjoying Untangled? Let Me Know!
Testimonials help me grow this li’l community. If you’re getting value out of Untangled, please consider sharing one.
💫 Work With Me
Here are 4 ways I can help:
Facilitation: I can help facilitate your team through complex and fraught dynamics, so that they can achieve their purpose.
Advising: I can help you navigate uncertainty, make sense of AI, and facilitate change in your system.
Organizational Training: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change. (For either Stewarding AI or Systems Change for Tech & Society Leaders)
1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.


