When AI Joins the Meeting: Rethinking How We Decide Together
PLUS: Tech's Faustian bargain, Google takes an 'L', and the rise of AI 'workslop'
Weekly Reads
Google Loses: An Indianapolis community just fought Google's proposed data center — and won. Google withdrew its plan to rezone 468 acres of farmland in Franklin Township after pushback from residents and the city council. In exchange for a massive tax break, the company would have saddled the community with the risks: higher electricity bills and potential water contamination, both of which data centers are notorious for. A new research paper shows just how steep the energy demands of AI can get: for text-to-video generators, doubling the length of a video quadruples the energy use. That kind of non-linear scaling doesn't just strain infrastructure — it drives up costs for everyone else on the grid. In short, what might look like "progress" in the abstract often comes at very real local costs.
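To make that scaling concrete, here's a toy back-of-the-envelope calculation in Python. The quadratic relationship (energy growing with the square of video length) is the paper's finding; the baseline watt-hour figure is invented for illustration, not a number from the paper.

```python
# Toy model: the paper reports that doubling a video's length quadruples
# generation energy, i.e. E grows roughly with L^2. The baseline figure
# below is made up for illustration; only the *scaling* comes from the paper.

BASELINE_LENGTH_S = 5      # seconds
BASELINE_ENERGY_WH = 1.0   # watt-hours (hypothetical baseline, not measured)

def energy_wh(length_s: float) -> float:
    """Estimated energy for one generation, assuming E grows with L^2."""
    return BASELINE_ENERGY_WH * (length_s / BASELINE_LENGTH_S) ** 2

for length in (5, 10, 20, 40):
    print(f"{length:>3}s clip -> {energy_wh(length):5.1f} Wh "
          f"({energy_wh(length) / BASELINE_ENERGY_WH:4.0f}x baseline)")
```

Three doublings of length (5s to 40s) means a 64x energy bill, which is why "just make the clips longer" is not a free lunch for the grid.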
The rise of AI "workslop." Your "speed" and "efficiency" may actually be your colleague's burden. Generative AI is changing how work gets done — and not always for the better. A new study finds that "AI workslop" is creeping into offices everywhere. Here's the pattern: someone uses AI to churn out a sloppy, half-baked draft, then hands it off for you to fix or redo. Sound familiar? You're not alone. Forty percent of full-time employees reported experiencing this in the past month. Most of that slop gets passed laterally to peers (40%), but it also flows upward — with 18% of managers receiving it from direct reports. The researchers put it bluntly: "Workslop may feel effortless to create but exacts a toll on the organization. What a sender perceives as a loophole becomes a hole the recipient needs to dig out of."
Tech's Faustian bargain: In a great long-form essay for Wired, Steven Levy traces how Silicon Valley moved from its countercultural, egalitarian beginnings to oligarchic alignment with Trump. The idealism of early leaders like Steve Jobs and Mitch Kapor gave way to massive wealth accumulation, political entanglements, and companies that now resemble the very institutions they once sought to disrupt. By the mid-2020s, this shift crystallized in Silicon Valley's uneasy alliance with Donald Trump. Many tech elites, resentful of Biden-era regulation on antitrust, crypto, and AI, found Trump's deregulation promises and transactional style appealing — democratic values be damned! Leaders convinced themselves this was savvy realpolitik, but in practice it has meant ceding influence to a president who extracts loyalty and concessions while concentrating power for himself. As Levy explains, the result is a Valley that once framed itself as David against Goliath but now behaves like Icarus — flying too close to the sun, complicit in authoritarian politics, and undermining the egalitarian ideals on which it was founded.
"Get the beat… listen to the wisdom of systems." Donella Meadows
🧶 When AI Joins the Meeting
Every organizational leader is asking some version of this question: How might I embed generative AI into group decision-making processes?
The first step is to ask whether, uh, you should invite AI into key decisions at all. The current paradigm of scale-at-all-costs generative AI means that adoption and use is complicit with the environmental costs of data centers, the psycho-emotional costs to data labelers, and the uncompensated theft of the world's data. It also means that you're inviting into key decisions a tool that — based on how it's designed — offers agreeable, sycophantic answers and makes stuff up. There are plenty of reasons to "just say no."
But I'm also not naive about the competitive pressures to adopt LLMs — even if companies don't know exactly how to make use of the tool, they don't want to fall behind. Fall behind what, you might ask? Right now, it's not revenue or novel use cases; it's perception and legitimacy. Either way, many people have decided to incorporate LLMs into their group decision-making processes, and I'd like to help folks do that in a way that doesn't go totally sideways.
If you decide to proceed, the first step is recognizing the kind of system you're in. Many organizations assume they operate in a complicated albeit predictable environment — one where cause-and-effect relationships hold and change is linear. In such settings, delegation frameworks like MOCHA or RACI work well because decisions can be clarified, handed down, and tracked within a centralized accountability model. But this approach assumes information flows cleanly through a hierarchy and that the system behaves predictably.
The problem? Most organizations don't actually live in that world. They operate in complex environments marked by uncertainty, adaptation, and emergent patterns. In this context, delegation can become a liability. Centralized decision-makers are often too removed from frontline relationships to see local context, detect subtle shifts, or connect emerging patterns. By the time information filters upward, it may already be outdated or distorted. As Dave Snowden explains, that's why complexity calls for distributed decision-making: empowering people to act on local information within clear constraints, while staying connected through feedback loops across the system.
The second step is to distinguish between what AI can and can't do. Vaughn Tan captures this well: AI excels at information processing, pattern recognition, and rapid analysis — but it cannot make meaning. It cannot reflect, exercise moral imagination, or grapple with the human complexities of trust, doubt, or long-term vision. Good luck using a generative AI tool to imagine a future state of your organization, distill the interpersonal complexities of a key partner relationship, or reflect on its own doubt about the path ahead. The point is, AI outputs can serve as valuable inputs to group decision-making, but they should never determine the decision itself.
This distinction matters because the line between human and AI contributions is becoming blurry. Research from The Artificiality Institute shows how people's thinking and identity can blend with AI systems — a phenomenon authors Helen Edwards and Dave Edwards call "cognitive permeability" and "identity coupling." In practice, it's becoming hard to untangle who, or what, is actually making the decision.
So what can leaders do? Part of the answer is scaffolding human thinking. Tan has prototyped a tool that uses structured prompts to help students clarify what kinds of reasoning they must do themselves versus what kinds of tasks AI might support. The sequence of prompts ensures people are building on their own reflections while keeping the division of labor between human and machine visible.
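Tan's prototype isn't public code, so treat this as a rough, hypothetical sketch of the pattern (the step names and prompts are mine, not his): the human answers the meaning-making questions first, and only narrowly scoped tasks are routed to a model, with the output clearly labeled as input.

```python
# Hypothetical sketch of prompt scaffolding (not Tan's actual tool):
# the human answers the meaning-making prompts first; only narrowly
# scoped steps may be routed to a model, and model output is labeled
# as input rather than as a decision.

from dataclasses import dataclass

@dataclass
class ScaffoldStep:
    prompt: str        # question posed at this step
    delegable: bool    # may an LLM assist with this step?
    answer: str = ""

SCAFFOLD = [
    ScaffoldStep("What decision are we actually making, and why now?", False),
    ScaffoldStep("What would a good outcome look like in two years?", False),
    ScaffoldStep("What background facts or patterns need summarizing?", True),
    ScaffoldStep("Given that summary, which trade-offs matter most to us?", False),
]

def run_scaffold(steps, ask_human, ask_model):
    """Walk the steps in order, routing each to the human or the model."""
    for step in steps:
        if step.delegable:
            # Model output enters as labeled input, never as the decision.
            step.answer = "[AI-assisted input] " + ask_model(step.prompt)
        else:
            step.answer = ask_human(step.prompt)
    return steps
```

Calling `run_scaffold(SCAFFOLD, input, my_llm_call)` (where `my_llm_call` is whatever model wrapper you have; it's a placeholder here) forces the reflective steps to happen in the human's own words before any model output enters the discussion.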
The other part is building shared meaning frameworks at the group level. Different assumptions about what AI is — a probabilistic text generator versus a general intelligence — lead to very different practices for incorporating its outputs. Groups that see hallucinations as a bug (psst, they're not!) will treat them differently than groups that treat hallucinations as a feature. Without alignment amongst the group, AI risks slipping into a de facto decision-making role. With alignment, groups can design processes that integrate AI's strengths while keeping human judgment at the center.
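What might that alignment look like written down? As a toy example (the field names and rules here are hypothetical, not from the research), a group could turn its shared assumptions into an explicit, reviewable artifact:

```python
# Hypothetical sketch: a team's shared assumptions about AI written down
# as an explicit, reviewable artifact instead of left implicit.

AI_GROUND_RULES = {
    "what_the_tool_is": "probabilistic text generator, not a general intelligence",
    "hallucinations": "expected behavior to design around, not a rare bug",
    "allowed_roles": ["summarize inputs", "surface patterns", "draft options"],
    "forbidden_roles": ["make the call", "break ties", "set strategy"],
    "review_rule": "every AI-assisted artifact is labeled and gets a human reviewer",
}
```

The specific contents matter less than the fact that they're explicit: when a disagreement surfaces, the group can point at the artifact and revise it, rather than re-litigating assumptions from scratch.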
Done right, according to The Artificiality Institute's research, this balance can improve group deliberation and decision-making. But the key is intentional design: scaffolding critical thinking, clarifying roles, and creating sociotechnical frameworks that make the human/AI division of labor explicit.
At the end of the day, the question isn't whether to hand decisions over to AI — you shouldn't. The real question is whether you can build the practices, guardrails, and shared language that keep human judgment at the center while constraining the use of these tools to what they do well. If you get that balance right, you don't just avoid the risks of letting probabilistic text generators steer your strategy. You also open up the possibility of something better: decision-making that is more reflective, more context-aware, and more resilient because it combines what machines do well with the distinctly human work of meaning-making.
Before you go: 3 ways I can help
Systems Change for Tech & Society Leaders - Everything you need to cut through the tech hype and implement strategies that catalyze true systems change.
Need 1:1 help aligning technology with your vision of the future, not the other way around? Apply for advising & executive coaching here.
Organizational Support: Your organizational playbook for navigating uncertainty and making sense of AI — what's real, what's noise, and how it should (or shouldn't) shape your system.
P.S. If you have a question about this post (or anything related to tech & systems change), reply to this email and let me know!


