How to define your system
A guide to identifying clear boundaries and avoiding ‘unintended consequences.’

This issue is part of a series of practical guides. They offer you the lenses to see your system clearly, and the levers to collectively change it. The series includes:
Guide 3: How to analyze the frames and metaphors that hide power in technology.
Guide 5: How to anticipate technology’s impact on different communities.
Guide 7: How to anticipate different community uses of technology.
Guide 8: How to take interdependent (not independent) action.
Today, I’m launching Guide #12: How to define your system. Drawing boundaries around a system — defining it! — is really hard.
Sometimes, we’re too broad — everything is connected to everything! But the bigger risk is being too narrow. A team repeatedly misses its goals, so we focus on the team. But it turns out the underlying reason it keeps missing them is connected to organizational structures: how decision rights are allocated, what is rewarded and given status within the organization, and so on. Or maybe the system extends beyond the organization because the team dynamic is actually shaped by a partner relationship or a funder’s requirements.
When we define the system too narrowly and then intervene, the system surprises us — and we call those surprises ‘unintended consequences’! But the deeper truth is that we inserted artificial boundaries and didn’t see the complete picture.
The ‘boundaries’ we insert often reflect our position in the organization, our training, and our life experience. For example, say an algorithmic decision-making system repeatedly harms the same communities. In response, product teams and engineers might work together to design the technology in a way that is more ‘responsible’ or ‘fair.’ This narrow focus on the technology itself is understandable given the role and training of engineers and product managers.
Moreover, it’s not totally insane to consider fairness a reasonable goal if you’re a white guy who believes in meritocracy. If unfairness is simply the result of “fallible human biases on the one hand, and imperfect statistical procedures, on the other” as I wrote in “Beyond Minimizing Harms” (Untangled Deep Dive), then it’s plausible to make meaningful change by focusing on the algorithmic system. Together, they’ll draw a nice li’l boundary around the technology and the process of creating it, and then try to make it fairer.
It’s both easy to see how this occurs and woefully problematic. If meritocracy is a myth and unfairness is instead the result of structural inequities, you can’t realize an unbiased algorithmic system by focusing on the technology alone. You would have to change the interactions in the world that create the data — that train the model — in the first place! You would need people from different sectors, with diverse expertise, working across difference to shift the system. In this hypothetical — yet all too real — example, the engineer and product manager constrained the system to a tech problem, and created yet another example of ‘unintended consequences.’
Approaches
We need to broaden our aperture. But to where, and how?
There is no ‘right’ boundary. As Donella Meadows writes in Thinking in Systems, “We have to invent boundaries for clarity and sanity, and boundaries can produce problems when we forget that we’ve artificially created them.” But you can get closer to defining your system by 1) remembering that you’re always creating new boundaries, 2) identifying how your role, experience, and training might narrowly constrain what you see and what you leave out, 3) generating multiple perspectives on the boundaries of the system, and 4) clarifying your purpose and the question you’re asking.
Start with the question you’re trying to answer (e.g. why is my team missing its goals?) and offer an initial answer.
As you reflect on it, start to interrogate how you — and the system around you — might shape your initial instinct. With each question, zoom out more and more, from individual > team > organization > partners. Ask questions like:
How might my training shape what I see (and what I don’t)?
How might my role on the team contribute to what I see?
How might my background and societal position shape what I see?
How might my relationships with individual team members shape what I see?
How might my organization’s structure and/or incentives shape what I see?
How might my relationships with external partners shape what I see?
Use these questions to help you make your implicit boundaries explicit and update your initial answer.
Now, go through a similar progression of individual > team > organization > partners, but this time, focus on the substance of the issue. Ask questions like:
How might my actions contribute to the team missing its goals?
How might the team dynamics — processes, decisions, norms, etc. — contribute to the team missing its goals?
How might the organizational structure, incentives, and norms contribute to the team missing its goals?
How might the team’s relationships with external partners contribute to the team missing its goals?
If you want to go crazy, go back through this progression again, but this time focus on the interconnections between levels and the dynamics they create.
To be clear, the goal at this stage isn’t to identify the underlying dynamics and actually answer the question. Instead, the point is to notice, as you’re zooming out, when the connection between the variable you’re interrogating (e.g. individual actions, team dynamics, etc.) and the question you’re trying to answer starts to feel more tenuous. This is where you draw your first boundary. You would then generate multiple perspectives — from stakeholders in the system — on the boundary you identified, and update it, along with your answer to the initial question.
These questions are specific to an organizational challenge, but the approach — making your implicit boundaries explicit, zooming out to identify the relative contributions of different stakeholders (and their interconnections!), and then generating multiple perspectives on the boundary — can be applied to programmatic and strategic challenges too.
“When we think in terms of systems, we see that a fundamental misconception is embedded in the popular term ‘side-effects.’ This phrase means roughly ‘effects which I hadn’t foreseen or don’t want to think about.’” - Garrett Hardin, Ecologist
Why Untangled? Because there is no such thing as a ‘tech problem.’ All ‘tech problems’ are entangled in systems structured by power and inequality. If we don’t untangle the two, we perpetuate the status quo in the name of innovation and progress. My job is to help you untangle your system, and teach you the strategies, skills, and tools to change it.
I’d love to see this applied to a concrete, new technology. Maybe even the announcement of a forthcoming OpenAI device. I thought about unintended consequences when I read this about Jony Ive:
“In their interview, Mr. Ive expressed some misgivings with the iPhone and said that had motivated him to team up with Mr. Altman.”