When LLM users describe their experience with their chatbots, the results are so divergent that it can sound like they're describing two completely different products.
--
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
pluralistic.net/2025/08/16/jac…
1/
Previously, I've hypothesized that this is because there are two distinct groups of *users*: "centaurs" (people who are assisted by a machine - in this case, people who get to decide when, whether and how to integrate an LLM into their work)...
2/
And "reverse-centaurs" (people conscripted into being an assistant *to* a machine - here, people whose bosses have fired their colleagues and ordered the survivors to oversee an LLM that badly approximates the work of those departed workers):
pluralistic.net/2025/08/04/bad…
But yesterday, I read "The Futzing Fraction," an essay by Glyph that advances a compatible but very different hypothesis, one I find extremely compelling:
blog.glyph.im/2025/08/futzing-…
3/
Glyph proposes that many LLM-assisted programmers who speak highly of the reliability and value of AI tools are falling prey to two cognitive biases:
1. The "availability heuristic" (striking things are easier to remember, which is why we remember the very rare instances of kids being kidnapped and killed, but rarely think about the relatively common phenomenon of kids dying in boring car-crashes); and
4/
2. The "salience heuristic" (*big* things are easier to remember, which is why we double-check that the oven is turned off and the smoke alarms are working after our neighbor's house burns down).
In the case of LLM coding assistants, this manifests as an unconscious overestimation of how often the LLM saves you time.
5/
That's because an LLM that produces buggy code that you have to "futz with" for a while before it starts working is normal, and thus unmemorable, while a coding tool that turns a plain-language prompt into a working computer program is *amazing*, so it stands out in your memory.
Glyph likens this to a slot-machine: when you lose a dollar to a slot-machine, that is totally unremarkable, "the expected outcome."
6/
But when a slot pays out a jackpot, you remember it for the rest of your life. Walk across a casino floor as a player hits a slot jackpot, and the ringing bells, flashing lights, and cheering crowd will stick with you, giving you an enduring perception that slot-machines are paying out all the time, even though no casino could stay in business if this were the case.
7/
Glyph develops this analogy to describe why LLMs are *worse* than slot machines. He says that (non-pathological) gamblers set a budget for the amount of money they're prepared to lose to the slots, while a coder who's feeling warmly disposed to an LLM coding assistant may not put any explicit limits on how much time they'll spend futzing with LLM-generated code.
8/
(I'll add that part of the seductive joy of coding is that it can induce a kind of autohypnotic fugue state where you don't notice the passing of time; this is also a feature of pathological gambling.)
Glyph poses a hypothetical: you have a coding project (say, one you'd budget four hours for) and you ask a chatbot to write it. The resulting code doesn't work at first, but *does* work after ten minutes of futzing. That feels *amazing*, and you'll remember it forever as the time you saved 3:50 with a chatbot.
9/
But it's possible that you repeated the "well, I'll just futz with this for ten minutes" step to get to that final success so many times that the whole affair took *six* hours, two hours longer than it would have taken had you just written the program from scratch. It's like winning a $1000 jackpot after "just putting a dollar in," except that that was the one-thousand-and-first dollar that you fed to the machine.
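To make the arithmetic explicit, here's a back-of-the-envelope sketch in Python, using the illustrative numbers from the hypothetical above (the variable names and the 36-attempt count are my reconstruction: thirty-six ten-minute futzes add up to the six hours just described):

```python
# Back-of-the-envelope arithmetic for Glyph's hypothetical.
# All numbers are illustrative, taken from the scenario above;
# the variable names are mine, not Glyph's.

FUTZ_MINUTES_PER_ATTEMPT = 10  # "I'll just futz with this for ten minutes"
ATTEMPTS = 36                  # how many times that step actually got repeated
FROM_SCRATCH_HOURS = 4         # what writing the program yourself would take

total_hours = ATTEMPTS * FUTZ_MINUTES_PER_ATTEMPT / 60

# What memory records: only the final, successful pull of the lever.
felt_saving_hours = FROM_SCRATCH_HOURS - FUTZ_MINUTES_PER_ATTEMPT / 60

# What actually happened: the whole affair, measured against the baseline.
actual_loss_hours = total_hours - FROM_SCRATCH_HOURS

print(f"total time futzing: {total_hours:.1f} h")        # 6.0 h
print(f"remembered saving:  {felt_saving_hours:.2f} h")  # 3.83 h, i.e. 3:50
print(f"actual extra cost:  {actual_loss_hours:.1f} h")  # 2.0 h slower
```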
10/
Glyph says that in other business activities, the "let's just try this for 10 minutes more" strategy usually pays off, but that LLMs produce "an emotionally variable, intermittent reward schedule" that subverts your ability to wisely deploy that tactic.
But that's not the only way in which an LLM coding assistant is like a slot machine. Reg Braithwaite proposed that AI companies' business model is also like a casino's, because they charge every time you re-prompt the AI.
11/
He writes:
> When you are paying by the "pull of the handle," the vendor's incentive is not to solve your problem with a single pull, but to give the appearance of progress towards solving your problem.
social.bau-ha.us/@raganwald/11…
JPeck likens the use of an LLM coding assistant to "a dense intern" who has to be walked through each step and then have their work double-checked:
universeodon.com/@boscoandpeck…
12/
There's a big difference between an intern and an LLM. For a senior coder, helping interns is an investment in nurturing a new generation of talented colleagues. For a reverse-centaur, refining LLMs is an investment in fixing bugs in a product designed to put you on the breadline (if you believe AI companies' claims that their products will improve until they don't need close supervision), or it's a wasted investment in a "dense intern" who is incapable of improving.
13/
Image:
Cryteria (modified)
commons.wikimedia.org/wiki/Fil…
CC BY 3.0
creativecommons.org/licenses/b…
--
Frank Schwichtenberg (modified)
commons.wikimedia.org/wiki/Fil…
CC BY 4.0
creativecommons.org/licenses/b…
eof/
Cavyherd
in reply to Cory Doctorow:
"autohypnotic fugue state where you don't notice the passing of time"
This is also why texting while driving is so dangerous: focusing on composing your text short-circuits your sense of duration, so subjectively no time has passed, whereas out in the Real World, that semi has had time to close the mile between when you first noticed it & the moment you look up to find it right on top of you in the intersection.