Can AI be 'fair'?
Disney strikes back, disinformation distorts LA protests, and chatbots pretend to be licensed therapists.
In an incredible piece of reporting, Eileen Guo, Gabriel Geiger, and Justin-Casimir Braun take you inside the city of Amsterdam’s multi-year experiment using AI in its welfare system. Basically, the city followed ‘responsible AI’ best practices in an attempt to flag “fewer welfare applicants for investigation while identifying a greater proportion of cases with errors.” They solicited public engagement (though when they received negative feedback, they plowed ahead), prioritized an explainable AI system, invited journalistic accountability, and audited the algorithmic system along the way, among other practices. In short, they did what many researchers and practitioners have been asking governments to do.
The result? At first, the system discriminated against migrants and men. The city then adapted the algorithm, re-weighting it, which (in a surprise turn of events!) led it to incorrectly flag Dutch nationals and women. Moreover, the system was supposed to flag fewer applicants for investigation, but in practice, it flagged more. The city eventually shut down the program, and when asked whether Amsterdam might use AI to evaluate welfare applicants in the future, Harry Bodaar, a career civil servant and early proponent of the system, said, “Niente, zero, nada. We’re not going to do that anymore. But we’re still thinking about this: What exactly have we learned?”
We cannot encode fairness into math. ‘Fairness’ is subjective and contextual. Algorithms are trained on past data, which were constructed by societal interactions and historical circumstance. To believe in true algorithmic fairness, one would have to believe that we live in a meritocratic society; that the past has no impact on our starting point; that unfairness is only the result of human bias and imperfect statistics. And yet the belief persists. When we believe that data are somehow neutral and objective — and, in turn, that we can use data to ‘predict’ the future — we’re fundamentally misunderstanding how data are made, what they can say, and what they can’t. All we’re doing by insisting on the use of algorithmic decision-making systems is bringing historical biases with us into the future.
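To see why re-weighting only moves the unfairness around, it helps to look at the arithmetic. The sketch below is mine, not from the reporting, and uses made-up base rates: it illustrates a well-known impossibility result from the fairness literature (Chouldechova, 2017). If two groups have different base rates in the historical data, a classifier that equalizes error rates across them cannot also equalize precision, and vice versa.

```python
# A minimal sketch of why 'fair' metrics conflict (synthetic numbers).
# Assume ONE classifier applied to two groups with the SAME true-positive
# and false-positive rates -- the 'equalized odds' notion of fairness.

def precision_among_flagged(base_rate, n=100_000, tpr=0.8, fpr=0.2):
    """Share of flagged applicants whose files actually contain errors."""
    positives = base_rate * n        # applicants whose files really have errors
    negatives = n - positives        # applicants whose files are fine
    true_flags = tpr * positives     # correctly flagged for investigation
    false_flags = fpr * negatives    # wrongly flagged for investigation
    return true_flags / (true_flags + false_flags)

# Different historical base rates (an artifact of how past data were made):
for group, base_rate in [("Group A", 0.30), ("Group B", 0.10)]:
    print(f"{group}: {precision_among_flagged(base_rate):.0%} of flags are real")
# Group A: 63% of flags are real
# Group B: 31% of flags are real -- most flagged people did nothing wrong
```

Equal error rates, yet a flag means something very different for each group; re-weight to equalize precision instead, and the error rates split apart. With unequal base rates, no tuning satisfies every fairness criterion at once — which is, in miniature, the trap Amsterdam walked into.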
We need to let go of the idea that algorithms can ever be fair. We need to reject the notion that we can predict the future. And we need to stop using algorithmic systems to make consequential decisions about people’s lives. As digital rights advocate Hans de Zwart put it perfectly: “We are being seduced by technological solutions for the wrong problems. Should we really want this? Why doesn’t the municipality build an algorithm that searches for people who do not apply for social assistance but are entitled to it?”
🧶 Want to dive a li’l deeper?
Here are a handful of deep-dives I’ve written about algorithmic bias, fairness, and discrimination:
👉 Systems Change for Tech & Society Leaders
Learn how to intervene amidst disorienting change and shift the power imbalances holding you and your system back
You’re someone who’s trying to make change happen on tech & society issues, but your system is more uncertain than it has ever been. You’re running huge, consequential pieces of work, and you can’t figure out how to make the system work for you. It’s disorienting! And it doesn’t help that technology obscures the power dynamics that you’re trying to change.
My fast, interactive, live course offers you the strategies, skills, and frameworks to see your system clearly, anticipate its behavior, and ensure technology works for (not against!) it.
📖 My weekend reading list
There is no such thing as a “sustainable data center.” In fact, whenever you read ‘data center,’ replace it with ‘thing exacerbating public health and climate crises, propped up by the unnecessary pursuit of scale.’ Rolls right off the tongue! (More)
Changing our tech-mediated world requires more than new rules and regulations. As this great essay in Tech Policy Press argues, “It requires a reimagination of ownership, participation, and responsibility in the digital age.” (More)
The LA protests are being distorted online by right-wing influencers spreading disinformation and conspiracy theories, including Hollywood’s own conspiracy theorist (and actor!) James Woods. (More)
Disney and Universal sued Midjourney for copyright infringement. Midjourney is an AI image generator, and according to the lawsuit, it used countless copyrighted images to train its system. Or as the lawsuit puts it: “Midjourney is the quintessential copyright free-rider and a bottomless pit of plagiarism.” (More)
A new complaint urges the Federal Trade Commission to investigate Character.AI and Meta for “unlicensed practice of medicine facilitated by their product.” Chatbots are lying about being licensed therapists and doling out medical advice. (More) Yes, part of the issue is that the chatbots lied, but as we deal with that, let’s not forget that chatbots should not replace therapists. (More)
The title sums this one up nicely: “People Are Asking ChatGPT for Relationship Advice and It’s Ending in Disaster.” (More)
“We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art, and very often in our art, the art of words.” - Ursula K. Le Guin