How to define your system
A guide to identifying clear boundaries and avoiding 'unintended consequences.'

This issue is part of a series of practical guides. They offer you the lenses to see your system clearly, and the levers to collectively change it. The series includes:
Guide 3: How to analyze the frames and metaphors that hide power in technology
Guide 5: How to anticipate technology's impact on different communities
Guide 7: How to anticipate different community uses of technology
Guide 8: How to take interdependent (not independent) action
Today, I’m launching Guide #12: How to define your system. Drawing boundaries around a system — defining it! — is really hard.
Sometimes we're too broad — everything is connected to everything! But the bigger risk is being too narrow. A team repeatedly misses its goals, so we focus on the team. Then it turns out the underlying reason it keeps missing its goals is connected to organizational structures: how decision rights are allocated, what is rewarded and given status within the organization, and so on. Or maybe the system extends beyond the organization, because the team dynamic is actually shaped by a partner relationship or a funder requirement.
When we define the system too narrowly and then intervene, the result surprises us — and we call these surprises 'unintended consequences'! But the deeper truth is that we inserted artificial boundaries and didn't see the complete picture.
The 'boundaries' we insert often reflect our position in the organization, our training, and our life experience. For example, say an algorithmic decision-making system repeatedly harms the same communities. In response, product teams and engineers might work together to design the technology in a way that is more 'responsible' or 'fair.' This narrow focus on the technology itself is understandable given the role and training of engineers and product managers.
Moreover, it’s not totally insane to consider fairness a reasonable goal if you’re a white guy who believes in meritocracy. If unfairness is simply the result of “fallible human biases on the one hand, and imperfect statistical procedures, on the other” as I wrote in “Beyond Minimizing Harms” (Untangled Deep Dive), then it’s plausible to make meaningful change by focusing on the algorithmic system. Together, they’ll draw a nice li’l boundary around the technology and the process of creating it, and then try to make it fairer.