The algorithm gave the order. The nurse said no.
What Adam Hart knew that the algorithm didn't.
Hi there,
Welcome back to Untangled. It’s written by me, Charley Johnson, and supported by members like you. Today, I’ve done something a li’l different: I used my STEWARD framework to analyze this incredible piece of journalism about how nurses are navigating the adoption of AI. It’s ultimately a story about a workflow that was never designed to protect the judgment it depended on.
As always, please send me feedback on today’s post by replying to this email. I read and respond to every note.
On to the show!
🏡 Untangled HQ
This Week:
Stewarding AI: I launched my new course, “Stewarding AI: How to Build Responsible Policies, Workflows, and Practices.” Be the first to hear when it’s open for enrollment.
Untangled Collective: I hosted Untangled’s March community event on practical approaches to world building and mapping backward. Next month’s event is all about mapping power.
Coming Up:
Stewarding Complexity: Bad news: the stable, predictable world your governance frameworks were built for doesn’t exist anymore. Good news: we’re figuring out what to do about it—together. Join Aarn and me this Tuesday for our next event on Stewarding Complexity.
The Facilitators’ Workshop: Every team has a nay-sayer, a bomb-thrower, and someone who’d rather let tension quietly fester than have a five-minute uncomfortable conversation. This free one-hour session is for everyone who has to work with them.
Systems Change: Tomorrow I’m opening enrollment to the waitlist for Cohort 7 of Systems Change for Tech and Society Leaders. Those already on the waitlist will get 40% off the new price ($1200) if they sign up by March 13 -- so ... get on the waitlist today!
🧶 Deep Dive
I want to tell you a story about a nurse named Adam Hart.
Hilke Schellmann — journalist and author of The Algorithm — has been investigating how AI is reshaping work across industries. In her reporting on what’s actually happening on hospital floors, she follows Hart to the bedside of a patient flagged by a sepsis alert. An algorithm generated an order. His charge nurse told him to comply. Hart refused. He had noticed something the algorithm couldn’t: a dialysis catheter. Administering fluids would have harmed the patient. He also had “that feeling” — a somatic, non-algorithmic form of knowing built over years of watching bodies decline and recover.
This isn’t a story about one difficult charge nurse. It’s a story about a workflow that was never designed to protect the judgment it depended on. I’ve spent the last year developing a seven-step framework called STEWARD for exactly this situation. Not just in hospitals, but in any organization deploying AI into complex human work. Let me walk you through what Schellmann’s reporting reveals, step by step.
S — See the System
The hospitals in Schellmann’s reporting mostly skipped this step entirely. They asked, “What can we automate?” rather than, “What future are we trying to create, and does this tool help or hinder it?” The sepsis alert and the BioButton arrived with hype-laden launch narratives — UC Davis called BioButton “transformational,” and the industry promised continuous, around-the-clock monitoring that no human clinician could match — which substituted for genuine systems analysis. Nobody mapped how the alert would reshape authority on the floor, or the choreography between nurses and physicians. They bolted the tool on and called it innovation.
T — Trace What Must Stay Human
This is the heart of Hart’s story. His refusal wasn’t defiance — it was the exercise of exactly the kind of judgment STEWARD says must remain human. The “feeling” Hart and his colleague Beebe describe — documented carefully by Schellmann — isn’t soft. It’s non-delegable. It exists nowhere in any training dataset. It develops through years of getting things wrong, reading skin, and noticing how a patient’s breathing changes between 3am and 4am.
Hospitals that deployed these tools without first asking “what must stay human?” created conditions where those capacities were treated as obstacles rather than assets.
E — Evaluate New Risks, Accountability Shifts & Loss
When the sepsis alert generated an order, it quietly became a verdict. The algorithm redirected authority, compressed deliberation, and placed the burden of proof on the clinician who disagreed. When something went wrong, responsibility was impossible to locate, but nurses absorbed the consequences either way. This is the “problem of many hands” — and it’s not an accident of bad design. It’s what happens when you deploy AI without explicitly asking who is accountable when the machine is wrong.
W — Workflow Redesign
The sepsis protocol is the clearest failure in Schellmann’s reporting. The workflow wasn’t redesigned when the AI arrived — it was just augmented. Alert fires → charge nurse issues order → bedside nurse complies. No mandatory interpretation moment. No named override authority. No procedure for when the alert is wrong. STEWARD asks: who interprets this output before anything moves? What triggers a legitimate override? None of those questions appear to have been answered before deployment. The result was a policy built on a model that substantially underperformed its marketing, with nurses left holding the consequences.
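To make that concrete, here’s a minimal sketch in Python of what the redesigned workflow could encode: a mandatory interpretation moment, a named decision-maker, and an override path that gets documented rather than treated as insubordination. Every name and field below is a hypothetical illustration, not any hospital’s actual system.

```python
# A hypothetical sketch of STEWARD's workflow questions encoded as
# process steps. All names, fields, and actions are illustrative; this
# is not the hospital's actual system.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SepsisAlert:
    patient_id: str
    model_score: float   # a raw model output, not a verdict
    fired_at: datetime

@dataclass
class Disposition:
    decided_by: str      # a named clinician, so accountability is locatable
    action: str          # "treat", "override", or "escalate"
    rationale: str       # required: overrides are documented, not punished
    contraindications: list[str] = field(default_factory=list)

def dispose(alert: SepsisAlert, bedside: dict, nurse: str) -> Disposition:
    """Mandatory interpretation moment: no order moves until a named
    clinician reviews the alert against what they see at the bedside."""
    # Hart's case: a dialysis catheter makes the recommended fluids harmful.
    if bedside.get("contraindications"):
        return Disposition(nurse, "override",
                           "Bedside finding contradicts the recommended order",
                           bedside["contraindications"])
    # A legitimate escalation path when the alert and the bedside disagree.
    if bedside.get("uncertain"):
        return Disposition(nurse, "escalate",
                           "Alert and bedside picture conflict; physician review")
    return Disposition(nurse, "treat", "Alert consistent with bedside assessment")
```

The point isn’t the code. It’s that each of STEWARD’s questions becomes a required step with a name attached, so an override reads as the system working, not a nurse misbehaving.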
A — Adjust Interfaces & Tempo
The BioButton story Schellmann documents is interface design gone wrong in almost every way. The alerts were vague, frequent, and hard to interpret. They pulled nurses away from patients already flagged as high-risk to investigate false positives. As one nurse explained,
“I have my own internal alerts—‘something’s wrong with this patient, I want to keep an eye on them’—and then the BioButton would have its own thing going on. It was overdoing it but not really giving great information.”
The interface encoded urgency without specificity, training nurses to respond to pings rather than to patients. STEWARD asks: does this interface support critical thinking or passive compliance? Is uncertainty visible? Are alternatives presented? In almost every case Schellmann describes, the answer is no. The alert arrives with apparent authority and no explanation. As one researcher she spoke with put it, AI models should be able to explain why they’re recommending something.
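Here’s what that gap could look like in data terms: a hypothetical sketch contrasting an alert that encodes only urgency with one that makes uncertainty, reasoning, and alternatives visible. The field names are mine, not any vendor’s actual API.

```python
# Two hypothetical alert payloads: the kind nurses describe (urgency
# without specificity) versus one that makes uncertainty and reasoning
# visible. Field names are illustrative, not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class OpaqueAlert:
    message: str = "Patient deteriorating. Respond now."  # authority, no explanation

@dataclass
class LegibleAlert:
    finding: str             # what changed, specifically
    confidence: float        # uncertainty is visible, not hidden
    drivers: list[str]       # why the model is flagging this patient
    alternatives: list[str]  # plausible non-emergencies worth ruling out

alert = LegibleAlert(
    finding="Respiratory rate trending up over the last two hours",
    confidence=0.55,  # barely better than a coin flip, and it says so
    drivers=["respiratory_rate", "heart_rate_variability"],
    alternatives=["sensor displacement", "patient agitation"],
)
```

A payload like the second one gives the nurse something to reason with instead of something to obey.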
R — Review What the System Teaches
The nurses witness, in real time, what these systems teach. They’re teaching clinicians to wait for an alert before acting. They’re teaching deference — “the AI said so” as a dead end of clinical reasoning rather than a starting point. One system’s alerts are estimated to be false positives half the time, yet policy requires a response to every one. Over time, this erodes exactly the clinical instinct the story celebrates. If the next generation of nurses learns to wait for an alert, the “feeling” Beebe and Hart describe stops developing. That’s the most dangerous long-term cost, and the one least visible in any ROI calculation.
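A quick back-of-the-envelope, using entirely hypothetical numbers, shows how fast a respond-to-everything policy compounds:

```python
# Back-of-the-envelope with entirely hypothetical numbers: what a
# respond-to-every-alert policy costs when half the alerts are wrong.

alerts_per_shift = 40        # hypothetical alert volume
false_positive_rate = 0.5    # roughly the estimate in the reporting
minutes_per_response = 10    # hypothetical time to investigate one alert

wasted_minutes = alerts_per_shift * false_positive_rate * minutes_per_response
print(f"~{wasted_minutes:.0f} nurse-minutes per shift spent on false alarms")
# ~200 nurse-minutes per shift spent on false alarms
```

That time comes out of exactly the bedside attention the “feeling” is built from.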
D — Detect Drift & Design Corrective Moves
The drift Schellmann documents is already institutional. What began as a tool to support clinical judgment has, in some places, become a substitute for it. The nurses see it. They’re staging demonstrations, testifying before city councils, and going on strike — not because they’re anti-technology, but because they’re watching the root system get plowed under in real time.
UC Davis eventually stopped using BioButton because nurses were catching deterioration faster than the device was. That’s a monitoring system working. The problem is that it took a year, a pilot, and significant friction to get there.
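What would it look like to design that monitoring in from the start? Here’s a sketch of one possible drift signal; the data shape and threshold are my assumptions, not UC Davis’s actual process.

```python
# A sketch of the drift signal UC Davis's experience implies: track who
# catches deterioration first, the nurse or the device, and trigger a
# structured review when nurses consistently win. The data shape and
# threshold are assumptions, not UC Davis's actual process.

from statistics import mean

def nurse_first_rate(events: list[dict]) -> float:
    """Fraction of deterioration events the nurse flagged before the device."""
    return mean(1.0 if e["nurse_flagged_at"] < e["device_flagged_at"] else 0.0
                for e in events)

def review_needed(events: list[dict], threshold: float = 0.5) -> bool:
    # If nurses beat the device most of the time, the tool is adding
    # interruptions, not information. That's a corrective-move trigger,
    # not a reason to blame the nurses.
    return nurse_first_rate(events) > threshold
```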
Schellmann closes her piece with a line worth sitting with: “the ultimate value of the nurse in the age of AI may be not their ability to follow the prompt but their willingness to override it.”
STEWARD would add one thing: organizations need to be designed so that overriding is easy, expected, documented, and studied — not treated as defiance.
The fact that Adam Hart’s refusal landed as defiance isn’t a personnel problem. It’s a systems problem. And it’s entirely preventable.
I sent Schellmann a draft of this in advance to get her feedback. She had these kind words to say: “What a powerful way to use reporting to better real-world practices - I am so thrilled this work can be utilized to not only show what we need to work on, but also exactly how to implement AI responsibly in organizations.” This is the first time I’ve used my frameworks to map a real-world case study and share it as a newsletter. What did you think? Please respond to this poll or hit ‘Reply’ and let me know. Thanks!
⚒️ Systems Change
Think your organization has a healthy relationship with technology? There’s a diagnostic for that -- and it’s free!
💫 Work With Me
Here are 4 ways I can help:
Facilitation: I can help facilitate your team through complex and fraught dynamics, so that they can achieve their purpose.
Advising: I can help you navigate uncertainty, make sense of AI, and facilitate change in your system.
Organizational Training: Everything you and your team need to cut through the tech-hype and implement strategies that catalyze true systems change. (For either Stewarding AI or Systems Change for Tech & Society Leaders)
1:1 Leadership Coaching: I can help you facilitate change — in yourself, your organization, and the system you work within.