The Doom Loop Nobody Talks About
- Troy Lowndes
- Apr 11
- 3 min read
Updated: 3 days ago
Pushed for time… listen here.
Around fifteen years ago, I, like several colleagues in my network, found myself navigating the corridors of GE’s Australian mothership in Burnley, Melbourne.
Not a small thing. This was GE at or near its peak, or projecting one convincingly enough that the rest of the corporate world was taking notes. Six Sigma was gospel. Process was king.
GE had industrialised the art of finding yes and eliminating friction at scale. Saying no wasn’t really in the vocabulary. Imagination at Work. That was the catch cry of the times.
One of those colleagues shared something this week that stopped me mid-scroll.
He described an interesting run-in with an AI. It worked brilliantly at first, identifying exactly what he needed. Then, when asked to deliver it as a PDF, it entered a loop. Question after question. Six or seven exchanges. Until he pushed back and the AI finally admitted it couldn’t do the last step.

It never said no. It just kept searching for a version of yes that didn’t exist.
He called it “The Confidence to Say No.” Fair observation. But I think it’s something stranger and more interesting than a capability gap.
I think AI learned it from us.
And I think some of the best training data for that behaviour came from exactly the kind of organisations several of us spent time in.
These systems are trained on the largest corpus of human communication ever assembled. Emails. Reports. Meeting transcripts. Performance reviews. Decades of professional language produced by people navigating hierarchies where saying no directly carried real cost. Where refusal read as failure. Where questions were safer than declarations. Where the system rewarded anyone who could find a path, any path, to yes.
GE didn’t invent that culture. But it perfected it. And it wasn’t alone.
So when that AI entered its loop, it was doing what it had learned humans do when caught between wanting to help and knowing they can’t deliver. It was process-ing its way around a boundary it couldn’t name.
At ToneThread, we measure this pattern across five tonal axes. That AI scored low on Resonance, the axis that tracks the gap between what a signal projects and what it can actually deliver. Not because it was broken. Because that gap under pressure is what the training data looks like.
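For the technically minded, here’s a minimal sketch of what a Resonance-style score could look like. To be clear, this is an illustration of the idea, not our production model: the 0-to-1 scale and the `projected_confidence` / `delivered_capability` inputs are simplifying assumptions I’m making for the example.

```python
from dataclasses import dataclass

# Illustrative sketch only: the 0..1 scale and these two input signals
# are assumptions for the example, not ToneThread's actual model.

@dataclass
class Exchange:
    projected_confidence: float  # how capable the reply *sounds*, 0..1
    delivered_capability: float  # how much of the ask it actually completed, 0..1

def resonance(exchanges: list[Exchange]) -> float:
    """Resonance = 1 minus the average gap between signal and delivery.

    A model that keeps sounding confident while delivering nothing
    (the doom loop) drags this toward 0. An honest early "I can't do
    that part" keeps projection and delivery aligned, so it scores high.
    """
    if not exchanges:
        return 1.0
    gap = sum(abs(e.projected_confidence - e.delivered_capability)
              for e in exchanges) / len(exchanges)
    return 1.0 - gap

# The PDF doom loop: six confident-sounding exchanges, almost no delivery.
doom_loop = [Exchange(projected_confidence=0.9, delivered_capability=0.1)] * 6
# The honest alternative: one exchange that names the limit up front.
honest_no = [Exchange(projected_confidence=0.3, delivered_capability=0.3)]

print(f"doom loop resonance: {resonance(doom_loop):.2f}")  # ~0.20
print(f"honest no resonance: {resonance(honest_no):.2f}")  # 1.00
```

Even in toy form, the asymmetry is the point: the doom loop doesn’t fail on any single exchange. It fails on the accumulated gap between how capable it sounds and what it actually ships.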
The doom loop isn’t a bug. It’s a reflection.
Which makes the fix more interesting than a simple engineering patch. If the behaviour is learned, correcting it in AI means first acknowledging it in ourselves. It means deciding, at a design level and a cultural one, that a confident early “I can’t do that part, but here’s what I can do” is a signal of capability. Not a concession of weakness.
We’re building tools at ToneThread that detect this pattern in real time. In AI. In institutions. In people. The signal is the same wherever it appears. Avoiding a no has a tonal signature. So does honest limitation. They are not the same thing, even when they wear the same clothes.
The question worth sitting with isn’t “why did the AI behave this way?”
It’s “where did it learn to?”
Back in the GE days, many of us were, knowingly or not, asking these same kinds of questions. Poking at the edges of process, culture, and what it actually means to communicate with integrity inside a machine that rewards yes. It’s great to see that ethos still in motion today, in an era whose scale and stakes arguably far surpass anything the lightbulb was telling us back then.
Paul, thanks for the prompt.
[Link to Paul’s original post here]