Remember playing the game whac-a-mole as a kid? You hit the mole, it returns to its hole, and then, in time, it pops back up, and on and on you go. That’s sometimes how it feels in the field of AI these days: we identify a harm or bias, take a whack at it by tweaking the model, and the harm goes away. For a time, anyway. But eventually it returns, or a new mole pops up. What if we could shift from ‘minimizing harms’ to transforming the sociotechnical systems that keep propagating those harms?
We can, but doing so doesn’t start by making small changes to the training data or the model itself. It starts with three mindset shifts that I explore in my e-book, AI Untangled:
No technology — AI included — is objective or neutral. Technologies shape and are shaped by social systems: the beliefs and biases of technologists, errant metaphors, cultural norms, narratives, existing structural inequities, and so on. We must cultivate a mindset that views technology not as objective, but as entangled in social systems.
No technology — AI included — impacts society in a neat, causal way. Social systems are not ordered; they’re unordered and complex. We must cultivate a mindset that doesn’t presume cause-and-effect but sees complex dynamics as emerging from the entanglement of technology and social systems.
Any predictive technology optimized for the world we live in today will reinforce the status quo and/or reflect its limits. This is because society is structured by gross inequities that cut across race and gender. Moreover, what we consider possible today is constrained by our behaviors, beliefs, and assumptions about the world. We must therefore cultivate a mindset that articulates radical new futures and maps backward to consider the role of technologies like AI.
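To see that third claim in miniature, here’s a toy sketch of my own (the data, groups, and rates below are invented for illustration; this isn’t an example from the book). A simple predictor is fit to synthetic hiring records in which equally qualified candidates from one group were historically approved less often. Optimized for yesterday’s decisions, it faithfully reproduces yesterday’s gap.

```python
import random
from collections import defaultdict

random.seed(0)

# Invented historical hiring data: candidates are equally qualified
# across groups, but group "B" was approved less often in the past.
def historical_record():
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.5
    approval_rate = 0.9 if group == "A" else 0.4   # structural inequity
    hired = qualified and random.random() < approval_rate
    return group, qualified, hired

data = [historical_record() for _ in range(10_000)]

# A "predictor" optimized for the world as it was: estimate
# P(hired | group, qualified) directly from the historical record.
counts = defaultdict(lambda: [0, 0])               # [hired, total]
for group, qualified, hired in data:
    counts[(group, qualified)][0] += hired
    counts[(group, qualified)][1] += 1

def predicted_hire_prob(group, qualified):
    hired, total = counts[(group, qualified)]
    return hired / total if total else 0.0

print(predicted_hire_prob("A", True))  # ~0.9
print(predicted_hire_prob("B", True))  # ~0.4: same qualifications, lower score
```

Nothing in the code is malicious, and nothing is broken; the model simply mirrors the record it was given. That’s the point.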
I wrote AI Untangled to help you cultivate these mindset shifts. The e-book is a collection of essays I’ve written over the last two years, broken down into thematic chapters — frames, metaphors, complex systems, and alternative futures. With today’s update, I’ve added new content to the collection.
Here’s an overview of the different sections of the e-book:
Frames
We use narratives to make sense of ourselves and the world around us. Yet the frames we select make some elements of a narrative salient and hide or downplay others. Frames aren’t neutral — indeed, they’re often intentionally created and marketed. Ya know how we say “climate change” rather than “global warming”? That was a strategic choice. In tacitly adopting a frame, we align ourselves with a set of interests, values, and politics, often without knowing it.
In this section, I explore the frames used to characterize AI -- that it is a truth-teller, a magic-maker, and a job-taker -- and interrogate how each encodes values, launders specific interests, and obfuscates power. Along the way, I offer a few alternative frames that can help us better understand how AI systems are entangled in systems of race, power, and gender.
Metaphors
Metaphors, like frames, help us make sense of our reality. “All perception is metaphor,” as Wittgenstein put it. But over time, we internalize the metaphors as if they constituted reality. We adopt metaphors like “information processing” to make sense of what machines can do, but then we drop the quotation marks and start to believe that machines really can process information, and even learn, reason, and understand. We assume that nothing is lost or changed by removing the quotation marks. That’s not true -- we lose quite a lot.
This section explores the key metaphors used to describe AI -- information and hallucination -- and one I wish we used much more often: management consulting firms.
Complex Systems
Complex systems are those in which intricate structures and patterns emerge from the interactions of simple components, all without the help of a central control mechanism. You can’t predict the outcomes of an intervention in a complex system, because the system is non-linear and its component parts are interdependent. Studying those parts in isolation won’t tell you how the system behaves; its macro behavior can only be interrogated through its dynamics.
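Emergence is easier to feel than to define, so here’s a minimal sketch of my own (a simplified, one-dimensional variant of Schelling’s segregation model; it’s not an example from the book, and every number in it is an assumption). Each agent follows one mild local rule, nobody coordinates, and a stark macro pattern appears anyway.

```python
import random

random.seed(42)

# Two types of agents ("X" and "O") on a ring of cells. An agent is
# content when at least 2 of its 4 nearest neighbors share its type --
# a mild preference, enforced by no central controller. Discontent
# agents relocate (via a swap) only if the new spot would content them.
N = 80
grid = [random.choice("XO") for _ in range(N)]

def content(i):
    """True if the agent at cell i has at least 2 same-type neighbors."""
    neighbors = [grid[(i + d) % N] for d in (-2, -1, 1, 2)]
    return sum(n == grid[i] for n in neighbors) >= 2

print("before:", "".join(grid))

for _ in range(20_000):
    i = random.randrange(N)
    if not content(i):
        j = random.randrange(N)
        grid[i], grid[j] = grid[j], grid[i]   # try relocating
        if not content(j):                    # still discontent? undo
            grid[i], grid[j] = grid[j], grid[i]

print("after: ", "".join(grid))  # same-type runs grow well beyond chance
```

No amount of inspecting a single agent’s rule would tell you the long runs were coming; the pattern lives in the dynamics, not the parts.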
This section applies a few key concepts from complexity science -- ordered vs. unordered systems, dynamics, emergence, and feedback loops -- to sociotechnical ‘AI’ systems, all to help us see them a bit more clearly.
Alternative Futures
Too often, we take a new technology and integrate it with existing systems and processes. For example, we use algorithmic recommendation systems to support traditional hiring processes, and then we’re somehow surprised when they replicate inequities typical of that sector. To achieve more just and equitable outcomes, we need to articulate radical alternative futures, and then map backward to consider the role of technology.
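To make that hiring example concrete, here’s a deliberately crude feedback-loop sketch of my own (the pools and numbers are invented; this is not an analysis from the book). A recommender ranks two equally qualified candidate pools by historical hires, and each recommendation becomes the next data point it learns from.

```python
# Two equally qualified candidate pools, separated by a small
# historical gap in hires.
hires = {"pool_A": 105, "pool_B": 100}

for _ in range(1000):                        # 1,000 hiring rounds
    recommended = max(hires, key=hires.get)  # rank by past outcomes
    hires[recommended] += 1                  # the pick becomes new "data"

print(hires)  # {'pool_A': 1105, 'pool_B': 100}: a 5% edge became 11x
```

Making the recommender more accurate on historical outcomes only tightens the loop. Changing the outcome means changing what the system optimizes for, which is a question about futures, not model tweaks.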
In this section, I apply a sociotechnical lens to a few systems -- prediction, search, recruitment, beauty standards, and policing -- and explore alternatives that don't just sustain the existing sociotechnical system but could transform it, leading to radically different futures.