The Intelligence of a Hunch
What AI Will Never Have
Hi there,
Welcome back to Untangled. It’s written by me, Charley Johnson, and valued by members like you. Today, I’m writing about why so-called “world models” can’t save AI, and the intelligence of a hunch.
As always, please send me feedback on today’s post by replying to this email. I read and respond to every note.
🔦 Untangled HQ
There is a lot going on y’all, so I thought I would break it down. For those keeping track, you can get involved in Untangled via:
Community
The Untangled Collective - join my community for tech & society leaders navigating technological change and changing systems.
Stewarding Complexity - join my private community with Aarn Wennekers for boards, CEOs, and organizational leaders who want to step outside formal governance structures, speak candidly with peers, and practice making sense of complexity — together.
The Facilitators’ Workshop - join my community with Kate Krontiris for rising and established leaders who want to perfect the craft of facilitating groups to achieve their purpose.
🚨 Courses
Cohort 6 of Systems Change for Tech & Society Leaders starts this Friday. Thursday is the last day to register.
Events
You can track all of the events across each community here.
The next event is this Wednesday at 11:00 am PT: The Four Mindsets Keeping Your System Stuck (and One That Will Change How You See It)
Leadership Coaching
I coach a limited number of leaders in stewarding change — in their organization, their system, and themselves. I currently have two openings — you can see what my clients have said about their experience and learn more here.
Okay, I’ve got a few announcements still in the hopper, but that’s enough of an update for one email.
On to the show!
🧶 Why World Models Won’t Save AI
The AI narrative is shifting again. What’s needed now, we’re told, are “world models.” LLMs aren’t sufficient by themselves to achieve artificial general intelligence because, as Google DeepMind CEO Demis Hassabis recently put it, “they just predict the next token based on statistical correlations.” He continued: they “don’t really know why A leads to B.” Glad we can finally put that mystery to rest!
Hassabis et al. want you to believe that world models offer an answer to the structural limitations of large language models. Intelligence isn’t just a function of language, the thinking goes. Rather, it’s something that emerges from interacting with an environment. By training on video game data, robotics sensor data, and the like, a world model can supposedly overcome the limitations of language, predict physical and social dynamics, and reflect embodied intelligence. But world models, too, fall for (or propagate?) the same fundamental falsehood that created an economy propped up by a bullshitting chatbot: that induction can lead to a kind of generalizable intelligence.
Induction is the process of gaining predictive ability by observing regularities in the world. It forms the backbone of modern machine learning—but it carries structural limitations that fundamentally constrain what AI systems can achieve. As David Hume recognized, induction requires us to assume that “instances of which we have had no experience resemble those of which we have had experience.” The sun rose yesterday and today, but it might not rise tomorrow — and much of the world is far less regular than the sun!
In his great book, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do, Erik Larson calls this the “empirical constraint” of AI: machine learning systems do all their pattern-detecting on past events. This makes them inherently backward-looking. When the future diverges from historical patterns or a novel situation arises, they break. LLMs promised intelligence by inhaling the Internet and predicting the next word in a sequence. World models are doing the same thing — take in videos, simulations, and sensor data, and extract regularities: “objects tend to fall downward,” “things behind obstacles are occluded but still exist,” “collisions cause changes in motion.”
So what we get from inductive inference is only provisional knowledge that offers no guarantee of truth. This creates what’s known as the long-tail problem: systems that struggle with exceptions, atypical observations, and unlikely events, precisely because these phenomena appear infrequently in training data. Yet exceptions and surprises are fundamental features of the real world, not edge cases to be engineered around. The system that has seen a thousand sunrises has no framework for understanding what it would mean if the sun didn’t rise—it simply lacks the data point.
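To make the empirical constraint concrete, here’s a minimal sketch in Python of a predictor that learns nothing but frequencies from its history. It’s a toy of my own (the FrequencyModel class and the sunrise data are made up for illustration), not how any production LLM or world model actually works, but it shows the inductive core in miniature: events that never appear in the training data simply don’t exist for the system.

```python
from collections import Counter, defaultdict

class FrequencyModel:
    """A toy inductive predictor: it knows only what followed what, and how often."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, sequence):
        # Observe regularities: count every (previous event -> next event) pair.
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, prev):
        # Estimate what comes next purely from past frequencies.
        total = sum(self.counts[prev].values())
        return {event: n / total for event, n in self.counts[prev].items()}

# A thousand days on which the sun rose, every single time.
history = ["night", "sunrise"] * 1000
model = FrequencyModel()
model.train(history)

print(model.predict("night"))
# {'sunrise': 1.0} -- total certainty, because the model has only ever seen sunrises.
# A morning without a sunrise isn't "unlikely" here; it has no representation at all.
```

Swap the toy events for tokens, video frames, or sensor readings and the scale changes enormously, but the logic doesn’t: the prediction is only ever a reshuffling of the past.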
Proponents of world models want you to believe that we’ve been going after the wrong data. LLMs just predict text in sequence, the argument goes, which doesn’t tell us anything new; what we need is data with more dimensions, data that helps us understand physical and social dynamics in space. But that thinking still bumps up against the structural limitations of inductive systems. They are “tied inextricably to data and frequencies of phenomena in data,” Larson notes. They cannot perform the knowledge-based inferences necessary for intelligence—the kind of reasoning that requires understanding why things happen, not just that they happen.
Large language models, world models, and every future model that pretends to be the crucial missing link will be similarly constrained: induction is insufficient, and, here’s the kicker, AI doesn’t have a theory of abduction. Abduction represents a fundamentally different mode of reasoning than induction—one that gets closer to the essence of human intelligence. While induction moves from data to generalizations about regularity, abduction moves from the observation of a particular fact to a hypothesis that explains it. This is reasoning as detective work: we see facts and data as clues pointing toward underlying causes and meanings.
Abduction is the power of a hunch, a gut instinct, a curiosity. We conjecture against a background of effectively infinite possibilities, yet somehow converge on hypotheses that seem plausible or likely. To clarify the distinction, Larson offers a basic example of the inductive move: “If it’s raining, the streets are wet. The streets are wet. Therefore, it’s raining.” But the street might be wet for a million other reasons — a nearby sprinkler, a fire hydrant, or … whatever! In abduction, we see a wet street and then make a guess at the best explanation based on contextual clues.
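As a rough illustration (again, a made-up toy, not anyone’s real system), here’s what the purely inductive guess looks like in code. The guess is whichever cause co-occurred most often with a wet street in the past, and there’s no place in the mechanism for a contextual hunch to enter.

```python
from collections import Counter, defaultdict

# Hypothetical training data: (evidence, cause) pairs the system has seen before.
observations = [
    ("wet_street", "rain"),
    ("wet_street", "rain"),
    ("wet_street", "rain"),
    ("wet_street", "sprinkler"),
]

counts = defaultdict(Counter)
for evidence, cause in observations:
    counts[evidence][cause] += 1

def inductive_guess(evidence):
    # The inductive move: return whichever cause appeared most often
    # alongside this evidence in the past.
    return counts[evidence].most_common(1)[0][0]

print(inductive_guess("wet_street"))  # 'rain', every time, regardless of context

# The abductive move has no counterpart here: noticing the street-cleaning truck
# parked around the corner and conjecturing a cause that appears nowhere in the data.
```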
This capacity for hypothesis generation is not a luxury feature of intelligence—it is “the starting point for any intelligent thinking at all,” according to Larson. So where induction shows that something actually is happening, abduction suggests that it might be happening. The ‘might’ holds a lot of power in reasoning and intelligence — it’s the potentials and possibilities that trigger real-world thinking and original ideas.
The power of abduction lies in how it reconceptualizes observation itself. Rather than treating observations as neutral facts to be analyzed statistically, abduction views observed facts as signs embedded in a web of possibility—clues that point toward features of the world relevant to particular questions or problems. This shift is critical in rich cultural contexts where there are too many facts to analyze and only a few are relevant; abduction guides us toward what matters.
This same capacity allows us to undergo conceptual revolutions in how we understand our world: seeing new meaning in everyday happenings or recognizing that our entire theoretical framework needs replacement. These “mysterious and wonderful abductive inferences pervade human culture; they are largely what make us human.” They represent our ability to imaginatively leap beyond what the data directly shows to explanatory structures that illuminate why things are as they are.
If you’re wondering, yes, AI researchers have tried to quantify this capacity too. As Larson explains, a sub-field called knowledge representation and reasoning (KR&R) has largely failed to build common sense or the instinctive hunch into a model because, well, our implicit knowledge is vast! Hell, most of what we know remains unconscious until circumstances—surprise, confusion, deliberate reflection—require us to bring it into explicit awareness. This implicit knowledge base is unbelievably large in ordinary people, and any piece of it might prove necessary for some inference or other.
Abduction is what allows us to flexibly mobilize relevant knowledge from this vast store when confronted with novel situations. This is why imagination—the capacity for inferences that don’t exist in any dataset—fundamentally requires conjecture and abductive reasoning. “Abduction is inference that sits at the center of all intelligence,” according to Larson, because it is what allows us to go beyond pattern recognition toward genuine understanding, to generate new explanatory frameworks rather than simply apply old ones, and to reason creatively about possibilities rather than merely extrapolate from actualities. Without abduction, AI systems remain trapped in the prison of their training data, like sophisticated statisticians incapable of the imaginative leaps that characterize human thought.
Abduction is what enables imagination. It’s what leads to novel research and scientific breakthroughs. It’s what allows us to see that we’re walking along a path that is tethered to the past, and chart a new one. And critically, it’s what allows us to recognize when the categories we’re using to understand a problem are themselves part of the problem—when the frame needs to break, not just fill in. This is the work that systems change requires: not optimizing within existing logics, but sensing when those logics have reached their limits and something genuinely new must emerge. We’re in a moment that desperately needs imagination and curiosity — and it’s critical to remember that this is a capability that humans have, and machines will never possess.
👉 Before you go: 3 ways I can help
Advising: I help clients develop AI strategies that serve their future vision, craft policies that honor their values amid hard tradeoffs, and translate those ideas into lived organizational practice.
Courses & Trainings: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change.
1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.


