This week, the United Nations ‘AI for Good’ summit took place in Geneva — and I’d like the organizers to change its name, and for all of us to drop it from our lexicon.
‘AI for’ builds ‘AI’ into the premise of whatever solution or program is on offer. If we want real change, we must envision the world that we want and then map backward to determine whether and how to include AI at all. In ‘How do we work our way towards utopia,’ I told the story of the 47 AI pioneers, all of them white men and the vast majority of them experts in math and computers, who came together in the summer of 1956 to discuss the future of AI at Dartmouth College. We’re still living in the world they imagined — and that’s a big problem — because it rests on faulty premises: that humans can be modeled like computers, and that we can use applied math to predict the outcomes of their decisions.
None of them were thinking about how the technology would interact with complex social systems. As I wrote in that piece, they made the crucial mistake of starting with “the technology and then imagined what the world might look like with that technology in it.” This is the same mistake the ‘AI for Good’ summit is making, and it’s the same mistake every generation of ‘tech for good’ enthusiasts makes. Over the last 20+ years, I’ve watched as “mobiles for good,” “drones for good,” and “blockchains for good,” among other movements, inevitably frame whatever problem they are trying to solve through the lens of technology. And then, a few years later, they bemoan the ‘unintended consequences’ that resulted from woefully minimizing the complexity of the system they were trying to change. When determining what about your system needs to change, you have to start with outcomes (social, interpersonal, economic, political, etc.) and work backward as you consider the role of technology. (Untangled Deep Dive) ‘AI for’ turns ‘AI’ into a mechanism to realize these outcomes before one has determined what needs to change in the first place.
‘For good’ is so broad and vague that it allows whoever is proposing the program or solution to define it for themselves. (Untangled Deep Dive) Indeed, the conference hosts threatened to censor Abeba Birhane’s keynote because one of her slides advocated for what I’d consider an obvious, basic position: “No AI for War Crimes.” But the organizers from the International Telecommunication Union saw it differently. This example is chilling and absurd, and should, as Birhane put it, prompt the organizers to “either rethink if their idea of ‘good’ continues to be defined by those who stand to gain from selling products or the general public and civil society and rights groups who tend to pay heavy price when AI technologies go wrong (and they always do).” Absent specificity and choices that confront real tradeoffs, ‘for good’ just becomes a self-justifying refrain for those in power to maintain it.
Don’t let tech hype hide what really needs to change: cut through it. My course starts next weekend, and, as one alum said, “It’ll beat any other self-investment you could make this year.”
The conference hosts get to decide what ‘for good’ looks like. Those promoting a new tech initiative get to decide what ‘for good’ looks like. But what about those impacted by the technology, or those who are a key part of the system that needs changing? This is bad if you believe in inclusion, diversity, and participatory processes — and it is also counterproductive to any change you’re trying to create. True systems change efforts require ‘getting the system’ in the room. That isn’t a trivial question of audience; it’s about strategically involving multiple diverse perspectives to see your system clearly and anticipate the potential impacts of technology; it’s about whose perspective and experience counts, and whose vision of the future matters. ‘For good’ replaces these choices with a self-justifying statement, and in so doing hands the future over to a few voices who — let’s be real — often reaffirm the existing power imbalances in the system. Not unlike the 47 white guys at Dartmouth 69 years ago.
I get it. You work in technology and you want to ‘do good’ in the world. But you don’t get to decide what ‘good’ looks like, you have to care more about the system you’re trying to change than the technology, and you have to be open to the possibility that your services aren’t required — because they might not be.