Remember playing the game Whac-A-Mole as a kid? You hit the mole, it retreats into its hole, and then, in time, it pops back up, and on and on you go. That’s sometimes how it feels in the field of AI these days: we identify a harm or bias, take a whack at it by tweaking the model, and the harm goes away. For a time, anyway. Eventually it returns, or a new mole pops up. What if we could move from ‘minimizing harms’ to transforming the sociotechnical systems that keep propagating those harms?
We can, but doing so doesn’t start by making small changes to the training data or the model itself. It starts with three mindset shifts that I explore in my e-book, AI Untangled:
No technology — AI included — is objective or neutral. Technologies shape, and are shaped by, social systems: the beliefs and biases of technologists, errant metaphors, cultural norms, narratives, existing structural inequities, and more. We must cultivate a mindset that views technology not as objective, but as ent…