Rethinking Goals in an Uncertain World
Why relationships—not rigid targets—are the key to meaningful systems change.
📖 Weekly Reads
Techno-Religion: Cade Metz delves into the belief systems that compose Silicon Valley’s new religion. In it, Harvard Chaplain Greg Epstein explains the connection between technology and religion this way: “What do cultish and fundamentalist religions often do? They get people to ignore their common sense about problems in the here and now in order to focus their attention on some fantastical future.” Dive deeper by listening to my conversation with Epstein on his book Tech Agnostic or reading my essay on tech and religion, “Transcend your meat prison.”
Systems & Uncertainty: Vaughn Tan maps different types of not-knowing (i.e. uncertainty and risk) in a useful visual. Dive deeper by listening to my conversation with Vaughn on putting meaning-making and uncertainty at the center of AI product development, or reading my essay on the topic, “It’s okay to not know the answer.”
Speculation & AI: What happened when someone gave ChatGPT $100 and let it invest in the stock market for a month? I expect there will be an uptick in this use case. When there is, situate it in the broader trend of speculating on uncertainty, which I’ve explored here and here.
‘Sycophantic AI’: The New York Times has an in-depth story of how one man and a chatbot “went down a hallucinatory rabbit hole together, and how he escaped.” Don’t delude yourself!
Generative AI & Medicine: What happens when a confident-sounding machine is wrong about a diagnosis? Or makes up a body part with assuredness? Read this piece from The Verge to understand what’s at stake in mixing generative AI and medicine.
Systems Innovation: I’ve been re-reading a 12-year-old report on Systems Innovation that offers a nice 2x2 framework for mapping systems change strategies. The defining variables that inform whether you pursue a distributed, collaborative, emergent, or control strategy? Knowledge and power! If you’re thinking, ‘hold on, knowledge is power!’ then pause, and first read my piece, “Intelligence is not the same thing as power.”
Rethinking Goals in an Uncertain World
Two years ago, I argued that ‘AI alignment’ isn’t a problem; it’s a myth. ‘The alignment problem’ is premised on the belief that we can align AI with our needs and wants. But our needs and wants conflict, and we can’t predict how AI-human interaction will play out in complex systems. ‘Alignment’ is a foolish goal in an uncertain world. The piece raised a question, though: if alignment is the wrong goal, what’s the right one?
The answer lies in rethinking goals altogether. One issue with goals? They encode power. Ask yourself: in the system you’re trying to change, who decided what success looks like? In civil society, it’s often bosses or funders. In the private sector, it’s investors, shareholders, and market pressures pushing for short-term returns (I explore an alternative vision in “The Myth of the Invisible Hand”). The issue isn’t that these stakeholders only act in self-interest, but that their goals are shaped—and limited—by their incentives and imaginations. As Indy Johar writes, “The very act of goal-setting can erase plurality, nuance, and emergent potential.”
Goals pose a challenge in any setting, but they become a real liability in complex, uncertain systems. As Indy Johar puts it, goals can “collapse diversity…and leave systems brittle in the face of uncertainty.” You set a target assuming stability — then the system shifts, and the goal no longer fits. But power dynamics often force you to keep chasing it anyway! Or your intervention misfires because the system reacts in ways you didn’t foresee, and now every stakeholder is scrambling to adjust. But effective adaptation requires the ability to sense, interpret, and respond — not adherence to a fixed plan. Targets, goals, and optimization-based technologies ask for certainty and control. Complex systems demand flexibility and continuous learning.
This doesn’t mean we should abandon goals — but we do need to rethink their role. Goals shouldn’t be fixed endpoints or static measures of success. They should be “provisional, situated, and subject to continual renegotiation as the system evolves,” as Johar explains. In other words: use goals to orient, not to define the destination. Hold them lightly. Let them go when they limit your ability to sense, adapt, and respond. If traditional targets no longer capture what successful systems change looks like — what should take their place?
Instead of targets, Johar suggests we focus on a system’s relational maturity — its ability to “host diversity without fragmentation, to reconfigure in response to feedback, and to generate evolving coherence without external command.” In systems change, it’s common to say: change the relationships, change the system. Johar goes further: the success and health of a system isn’t just built on relationships — it evolves through them. Our capacity to navigate difference, adapt amid conflict, and reach shared understanding through dialogue is what allows a system to hold multiple versions of good at once — and “maintain coherence” even as it changes or diverges.
The outcomes of strong relational systems aren’t engineered from the outside — they emerge from within. As Johar puts it, they arise as “coherences of sense, orientation, and action,” a process he calls emergent cohesion. Where AI alignment assumes we can predict and control complex systems, emergent cohesion asks us to build systems that can evolve through trust, feedback, and shared meaning. If we truly care about systems change, we need to stop centering technology — and start treating it as a tool in service of our relationships.
The AI Future No One Is Talking About
Most AI conversations today focus on scale, speed, and disruption. But the more capable AI becomes, the more urgent the deeper questions become:
What kinds of minds are we building? What kinds of people are we becoming? And who gets to decide?
The Artificiality Summit (Oct 23–25, Bend, OR) brings together thinkers, makers, artists, and scientists to explore these human questions. It’s a gathering for those who believe the future of AI is also the future of meaning, identity, and trust.
Join the conversation with people like:
Blaise Agüera y Arcas, Google
Jonathan Feinstein, Yale University
Ellie Pavlick, Brown University | Google DeepMind
Maggie Jackson, Author
Adam Cutler, IBM
Use promo code “UNTANGLED” for 10% off the ticket price at artificialityinstitute.org/summit.