Your Responsible AI PDF Is Lovely. Now Let’s Make It… Real
PLUS: A new tool you and your organization can put to use immediately.
🖇️ Some Links
I co-authored an academic article, “Troubling translation: Sociotechnical research in AI policy and governance” in the Internet Policy Review. Cool, huh?
A new paper argues that you should stop thinking of automated decision systems as predictive and start thinking of them as policy interventions.
Do you think a chatbot would make better decisions on your behalf than your elected representative? Terrific (albeit terrifying!) research by the Collective Intelligence Project found that more people than you might expect consistently answer that question with a resounding ‘Yes.’
Everything follows from how your organization thinks about data and technology. So stop thinking about them (data, technology) the wrong way, okay? Sign up for this free event to build community — and explore new mindsets!
From Principles to Practice: How Organizations Can Actually Live Their Responsible AI Commitments

Every organization is now rolling out “responsible AI principles.” Too often, they read like innovation word salad — a few values sprinkled in, plenty of buzzwords, and very little that actually guides action.
There are bright spots: the Kapor Foundation centering sociotechnical frameworks and power, and New_ Public reminding us that humans shouldn’t just be “in the loop” but in “the driver’s seat.” But even strong principles can stay stuck on the page. Or worse, they can reinforce a culture of compliance instead of learning and adaptation.
Principles only matter if they’re lived: if they translate into day-to-day practices, habits, and patterns of behavior.
The gap between responsible AI principles and practice is the Grand Canyon of gaps! This is my attempt to help close it.
Practice 1: Surface & Shift Default Mindsets
Responsible AI isn’t just about guardrails, audits, or model evaluations. It’s about how people inside organizations think — the inherited assumptions that shape how problems are defined, how tools are chosen, and whose perspectives count. Three default mindsets quietly undermine responsible AI: the neutrality mindset, techno-determinism, and techno-solutionism.
If you want AI to genuinely advance equity, accountability, and human flourishing, you need to help your organization see and shift these mindsets first.
1. The Neutrality Mindset
This mindset treats data and technology as objective mirrors of reality — as if dashboards simply “report the facts” and models “predict the future.” But data and technology are never neutral; they’re produced through relationships, incentives, histories, and power.
When organizations treat data as objective, four things happen:
Power gets hidden, because the data reflect the perspective of whoever had authority to define, collect, and interpret them.
Context disappears, leading leaders to mistake symptoms for causes.
Bias becomes a technical problem, instead of a systemic one rooted in relationships and history.
Bad decisions get legitimized, because “the data said so.”
Your goal is to help your organization see data not as facts but as artifacts: the sediment of social interactions, institutional incentives, and power structures. You can tell the mindset is starting to shift when data and metrics become conversation starters, not answers or evidence wielded to validate positions. Leaders will care less about optimization and prediction and will center questions like, “What interaction produced this data, and how might we shift it?”
2. The Techno-Determinist Mindset
This mindset says technology is the primary driver of social change — seen in phrases like “GenAI will transform our sector whether we like it or not” or “We’ll fall behind if we don’t adopt.”
It collapses a complex sociotechnical story into a simple causal one. It gives technology agency and removes our own.
Start patterning this shift by regularly centering questions like, “How does the story we’re telling ourselves shift responsibility away from people and institutions?” or “How might we approach generative AI if we assumed it wasn’t inevitable, and centered our own agency?”
This helps teams reclaim their agency and remember:
Responsible AI isn’t about reacting to technology — it’s about shaping it.
3. The Techno-Solutionist Mindset
This mindset believes complex social problems can be “fixed” with new tools. It sounds like:
“Let’s build a chatbot for that.”
“A GenAI tool will solve misinformation / access / capacity.”
But problems like inequitable hiring or misinformation aren’t technical glitches — they’re produced by power, norms, incentives, and history. Adding more tech without shifting those dynamics usually makes things worse.
Red flags you’re in solutionist territory:
The first instinct is to automate or build — no one has asked about context, incentives, or relationships.
The problem was defined without those most affected.
There’s more talk about the tool than the sociotechnical system it will enter.
Shifting this mindset means asking, “What would it take to change the conditions that produce this problem — not just its outputs?”
Practice 2: Cultivate a Sociotechnical Mindset
A sociotechnical mindset begins with a foundational truth about technology (responsible or not!): technology is never just technology. Every system is shaped by people, institutions, cultures, incentives, and power, and in turn reshapes them.
This mindset operationalizes responsible AI by shifting the focus from what a tool does in theory to what it does in context. It helps teams stop treating AI as a neutral artifact and start seeing the web of relationships it lives inside: who it empowers, who it marginalizes, what incentives it reinforces, and what futures it opens or forecloses.
A sociotechnical lens reveals what neutrality, determinism, and solutionism obscure: that the real leverage for responsible AI comes from seeing and shaping the interactions between technology and its social environment.
This is why the Kapor Foundation’s first responsible AI principle—“Utilize a sociotechnical framework to identify challenges and meaningful solutions”—is so powerful. It forces organizations to look beyond the tool and toward the system: the norms, power structures, workflows, and institutional practices that determine whether AI creates harm, benefit, or both.
You can begin practicing this mindset with one simple question:
“What non-technical dynamics—power, trust, incentives, norms—are we skipping over?”
Ask it regularly, and you’ll start to see the system more clearly. You’ll notice different risks, identify more durable solutions, and design AI practices that reflect the full complexity of the environments they enter.
Putting principles into practice isn’t just about building safer tools; it’s about building the capacity to see and shape the sociotechnical systems those tools inhabit.
Practice 3: Center Your System, Not the Tool
You need to stop asking “What is our AI strategy?” and start asking “What future do I want to realize? What about relationships, culture, and power needs to change in my system to head in that direction? And how might AI help or hinder that?”
Stop shoehorning technology into the future. Get clear on your vision and directional goals, and consider how generative AI might enable new ways of doing things (and, of course, introduce new harms).
You’ll know this responsible-AI shift is taking hold when conversations stop fixating on “the tool” and focus instead on the system:
the behaviors you want to cultivate,
the incentives you need to realign,
the relationships you want to strengthen,
the harms you’re trying to avoid, and
the futures you intend to meaningfully shape.
A simple, practical intervention can help pattern this behavior:
“Let’s try to describe the problem we’re solving without mentioning generative AI. What emerges?”
This forces teams to articulate the real challenge, independent of technology hype, and then consider AI as one possible—never inevitable—part of a sociotechnical solution.
Start with your purpose, stay grounded in your system, and treat technology as a tool to advance the future you actually want to build.
Practice 4: Remove Jargon & Abstraction
Practicing responsible AI requires clarity—about what a system is, what it does, and what it cannot do. One of the simplest and most powerful responsible-AI practices is to remove jargon and abstraction from your organization’s conversations about AI.
That means: stop using “AI” as a catch-all label. Get concrete. Name the model, the training data, the limitations, and the affordances. As Dave Snowden says, “force the animate spirit back into the inanimate, manufactured object.”
Industry jargon—“hallucinations,” “alignment,” “the model thinks”—anthropomorphizes statistical systems and creates the illusion that they are reasoning agents. This distorts safety discussions, inflates a system’s perceived capability, and obscures very real risks.
A chatbot doesn’t “hallucinate”; it makes probabilistic next-word predictions. That’s not a malfunction to be patched away—it’s a feature of how these systems work, and it has direct implications for safety, accuracy, and appropriate use.
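To make “probabilistic next-word prediction” concrete, here is a toy sketch in Python. The tokens and probabilities are invented for illustration (a real model scores its entire vocabulary at every step), but the mechanic is the same: the system samples from a distribution; it doesn’t retrieve facts.

```python
import random

# Toy stand-in for a language model's output at a single step:
# a probability distribution over candidate next tokens.
# (Invented numbers; a real model scores tens of thousands of tokens.)
next_token_probs = {
    "Paris": 0.70,   # likely continuation of "The capital of France is"
    "Lyon": 0.15,
    "Berlin": 0.10,  # wrong, but still holds probability mass
    "purple": 0.05,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sample 20 completions: low-probability tokens still get picked
# sometimes, which is why confident-sounding errors appear.
print(" ".join(random.choices(tokens, weights=weights, k=20)))
```

When “Berlin” comes out, nothing malfunctioned. The system did exactly what it was built to do, which is why “fixing hallucinations” is the wrong frame for a safety conversation.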
Become the jargon police! Kidding. Intervene with the goal of getting specific. Ask:
“When you say ‘hallucinate,’ what technically is happening?”
“If we treat this as a probabilistic system rather than an intelligent agent, how does that shift what we consider a safe or appropriate use case?”
When teams can describe what a system actually does—rather than what the hype says it does—you enable more grounded decisions, safer deployments, and more accurate expectations.
Practice 5: Make Meaning, Not ‘Use Cases’
Stop treating generative AI like a menu of plug-and-play “use cases.” As Wanda Orlikowski’s work on technology-in-practice shows, technologies don’t have fixed impacts—they become what people do with them. The same model can create completely different futures inside the same organization: marketing might use it to accelerate output, policy might restrict it as a compliance risk, and research might treat it as a creative partner. Those everyday, seemingly small choices—every prompt, rule, workaround, and workaround-to-the-workaround—solidify into patterns that ultimately define what the technology is for your organization.
In complex systems, the tool doesn’t determine the outcome.
The practices, norms, and meaning people build around it do.
So the real question isn’t “Where can we apply it?” but “How is this changing how we think, collaborate, decide, and make meaning together?” Because that is the world the tool is quietly shaping—and where risks, harms, and opportunities actually emerge.
To put this responsible-AI principle into practice, shift your conversations from applications to implications. Instead of chasing “use cases,” ask: “How is this changing how we work?”
This reframes AI deployment from a hunt for efficiency gains to a careful examination of behaviors, relationships, incentives, and power—where responsible AI truly lives.
Practice 6: Sense-Make, Don’t Evaluate
In complex systems, responsible AI requires shifting the question from “What changed?” to “What’s changing?”
This moves you away from a narrow, output-driven evaluation mindset and toward sense-making—a core responsible-AI practice focused on noticing emerging patterns, risks, opportunities, and unintended consequences as they unfold.
Sense-making means bringing diverse actors together to observe the system from multiple vantage points:
emerging relationships,
shifting narratives,
unexpected behaviors,
new feedback loops,
early signals of harm,
subtle changes in how information or authority flows.
Because no single team can see the whole system, responsible AI demands co-sensing—noticing weak signals, forming shared interpretations, and adjusting course before small issues become structural harms.
You can start building this responsible-AI muscle with one simple weekly ritual: spend 15 minutes asking, “What are you noticing?”
You’ll know you’re practicing responsible AI when insights emerge that no one person arrived with—and when the conversation naturally shifts from “what happened” to “what this might mean.”
Practice 7: Translate Across Difference
If organizations put any of this into practice, it will be because they learned to translate across difference. Because right now, everyone is talking past one another.
Teams talk past one another because they inhabit different frames of meaning. Each group sees the system through its own logic, values, incentives, and epistemology. This happens at the level of technology — what AI is and isn’t, assumptions about how data, technology, and social systems relate — and then again in your organizational context:
The Productivity Frame (e.g. Ops, Marketing, PMs): AI = efficiency, speed, scale.
The Risk/Compliance Frame (e.g. Legal, Policy, Security): AI = liability, exposure, reputational risk.
The Creative/Exploration Frame (e.g. Research, Design): AI = a collaborator or thought partner.
The Ethics/Justice Frame: AI = a site of power, inequity, and potential harm.
So the question for organizations becomes: how do we make these differences visible in a productive way and help people translate across them?
To put this into practice, introduce small, structured interventions such as:
Boundary objects → shared artifacts that different groups can interpret through their own lens but discuss together.
Perspective-taking prompts → “Explain this from the legal team’s worldview” or “How would Trust & Safety interpret this outcome?”
Lightweight role-play → let teams temporarily “inhabit” each other’s constraints to surface hidden assumptions.
You’ll know translation is taking hold not when everyone converges on a single view—if that happens, worry!—but when people with genuinely different lenses can still coordinate effectively because they understand how each other makes meaning.
Practice 8: Add ‘AI’ to Your Decision-Making Framework
If responsible AI principles are going to mean anything, they have to show up in the way your organization actually makes decisions. That means embedding AI explicitly into your delegation and decision-making frameworks — not as a decider, but as a contributor with very clear limits.
Traditional tools like MOCHA and RACI already strain in complex systems, which is why many leaders are shifting toward distributed decision-making approaches that let people act on local knowledge while staying connected through strong feedback loops. But whatever model you use, responsible AI practice requires specifying what role AI can play — and what role it must never play.
AI is great at processing information, spotting patterns, and generating options.
But AI cannot interpret context, weigh tradeoffs, understand power, engage in moral reasoning, or be held accountable. It cannot build trust or steward long-term vision. As New_ Public puts it, humans shouldn’t just be “in the loop”; they must remain in “the driver’s seat.” And as IBM warned decades ago: “A computer can never be held accountable. Therefore, a computer must never make a management decision.”
But here’s the challenge: people’s cognition is increasingly blending with AI tools — what the Artificiality Institute calls cognitive permeability and identity coupling. That means it’s getting harder to see where the system ends and human judgment begins. This makes it even more important to be explicit about AI’s role.
So if MOCHA or RACI is familiar to your team, keep it simple: AI should only ever be “Consulted.” And every output it generates must be interpreted, justified, and owned by a human.
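If RACI is how your team already works, you can even encode the rule. Below is a minimal sketch in Python; the decision, the actor names, and the `ai_` naming convention are all hypothetical, but it shows what it means for “AI is only ever Consulted” to live in the framework itself rather than in anyone’s memory.

```python
from enum import Enum

class Role(Enum):
    RESPONSIBLE = "R"  # does the work
    ACCOUNTABLE = "A"  # owns the outcome; must always be a human
    CONSULTED = "C"    # provides input; the only role AI may hold
    INFORMED = "I"     # kept up to date

# Hypothetical decision and actors, for illustration only.
raci = {
    "draft_grant_report": {
        "program_lead": Role.ACCOUNTABLE,
        "comms_team": Role.RESPONSIBLE,
        "ai_assistant": Role.CONSULTED,  # never Responsible or Accountable
        "board": Role.INFORMED,
    },
}

def validate(matrix: dict) -> None:
    """Fail loudly if an AI actor holds any role beyond Consulted."""
    for decision, assignments in matrix.items():
        for actor, role in assignments.items():
            if actor.startswith("ai_") and role is not Role.CONSULTED:
                raise ValueError(
                    f"{decision}: {actor} may only be Consulted, not {role.name}"
                )

validate(raci)  # passes; reassign ai_assistant to ACCOUNTABLE and it raises
```

However you implement it, the design choice is the same: the human who interprets, justifies, and owns the output is named before the AI is ever consulted.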
This is what it looks like to translate responsible AI principles into practice: not just saying “AI should be accountable,” but designing decision-making so accountability has a place to live. AI can inform decisions — but it can never be the decider.
Fill out this short survey to get the tool I created to start building organizational AI practices & patterns.
Before you go: 3 ways I can help
Courses & Trainings: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change.
Advising & Facilitation: I can help you navigate uncertainty, make sense of AI, and facilitate change in your system.
1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.

