It's easy to understand why some programmers love their AI assistants and others loathe them: the former get to decide how and when they use AI tools, while the latter have AI forced upon them by bosses who hope to fire their colleagues and pile extra work onto whoever remains.
--
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
pluralistic.net/2025/08/04/bad…
1/
Cory Doctorow
in reply to Cory Doctorow • • •Sensitive content
Formally, the first group are "centaurs" (people assisted by machines) and the second are "reverse-centaurs" (people conscripted into helping machines):
pluralistic.net/2025/05/27/ran…
Most workers have parts of their jobs they would happily automate away. I know a programmer who uses AI to take a first pass at CSS code for formatted output. This is a notoriously tedious chore. It's not hard to determine whether the AI got it right - just eyeball the output in a variety of browsers.
2/
If this were a chore you hated doing and someone gave you an effective tool to automate it, that would be cause for celebration. What's more, if you learned that the tool was only reliable for a subset of cases, you could confine your use of the AI to those cases.
Likewise, many workers dream of doing something through automation that is *so* expensive or labor-intensive that they can't possibly do it.
3/
I'm thinking here of the film editor who extolled the virtues to me of deepfaking the eyelines of every extra in a crowd scene, which lets them change the focus of the whole scene without reassembling a couple hundred extras, rebuilding the set, etc. This is a brand new capability that increases the creative flexibility of that worker, and no wonder they love it. It's good to be a centaur!
4/
Then there are the poor reverse-centaurs. These are workers whose bosses have saddled them with a literally impossible workload and handed them an AI tool. Maybe they've been ordered to use the tool, or maybe they've been ordered to complete the job (or else) by a boss who was suggestively waggling their eyebrows at the AI tool while giving the order.
5/
Think of the freelance writer whom Hearst tasked with singlehandedly producing an entire, 64-page "best-of" summer supplement, including multiple best-of lists, who was globally humiliated when his "best books of the summer" list was chock-full of imaginary books that the AI "hallucinated":
404media.co/viral-ai-generated…
No one seriously believes that this guy could have written and fact-checked all that material by himself.
6/
Nominally, he was tasked with serving as the "human in the loop" who validated the AI's output. In reality, he was the AI's fall-guy, what Dan Davies calls an "accountability sink," who absorbed the blame for the inevitable errors that arise when an employer demands that a single human sign off on the products of an error-prone automated system that operates at machine speeds.
It's never fun to be a reverse centaur, but it's especially taxing to be a reverse centaur for an AI.
7/
AIs, after all, are statistical guessing programs that infer the most plausible next word based on the words that came before. Sometimes this goes badly and obviously awry, like when the AI tells you to put glue or gravel on your pizza. But more often, an AI's errors are precisely, expensively calculated to blend in perfectly with the scenery.
8/
AIs are conservative. They can only output a version of the future that is predicted by the past, proceeding on a smooth, unbroken line from the way things were to the way they are presumed to be. But reality isn't smooth, it's lumpy and discontinuous.
Take the names of common code libraries: these follow naming conventions that make it easy to predict what a library for a given function will be, and to guess what a given library does based on its name.
9/
But humans are messy and reality is lumpy, so these conventions are imperfectly followed. All the text-parsing libraries for a programming language may look like this: docx.text.parsing, txt.text.parsing, md.text.parsing - except for one, which defies convention by being named text.odt.parsing. Maybe someone had a brainfart and misnamed the library. Maybe the library was developed independently of everyone else's libraries and later merged. Maybe Mercury is in retrograde.
10/
Whatever the reason, the world contains many of these imperfections.
Ask an LLM to write you some software and it will "hallucinate" (that is, extrapolate) libraries that don't exist, because it will assume that all text-parsing libraries follow the convention. It will assume that the library for parsing odt files is called "odt.text.parsing," and it will put a reference to that nonexistent library in your code.
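The extrapolation failure is mechanical enough to sketch in a few lines. This is a toy illustration using the hypothetical library names from above (none of these libraries actually exist): the "model" just fills in the convention, and the one convention-breaking name becomes a hallucination.

```python
# Toy illustration of convention-driven hallucination. All library names
# here are hypothetical examples, not real packages.
CONVENTION = "{fmt}.text.parsing"

# What actually ships - note the one library that breaks the convention.
REAL_LIBRARIES = {
    "docx": "docx.text.parsing",
    "txt": "txt.text.parsing",
    "md": "md.text.parsing",
    "odt": "text.odt.parsing",  # the lumpy, real-world exception
}

def llm_style_guess(fmt: str) -> str:
    """Extrapolate a library name the way a statistical model would:
    by assuming the convention always holds."""
    return CONVENTION.format(fmt=fmt)

for fmt, real in REAL_LIBRARIES.items():
    guess = llm_style_guess(fmt)
    status = "ok" if guess == real else f"HALLUCINATION (real name: {real})"
    print(f"{fmt}: guessed {guess} -> {status}")
```

Three of the four guesses are right, which is exactly what makes the fourth so dangerous: the wrong answer is the *most statistically plausible* name, so it looks more correct than the real one.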
11/
This creates a vulnerability for AI-assisted code, called "slopsquatting," whereby an attacker predicts the names of libraries AIs are apt to hallucinate and creates libraries with those names, libraries that do what you would expect they'd do, but also inject malicious code into every program that incorporates them:
theregister.com/2025/04/12/ai_…
This is the hardest type of error to spot, because the AI is guessing the statistically most plausible name for the imaginary library.
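One partial defense is refusing to install anything an AI suggests that a human hasn't already vetted. Here's a minimal sketch of that idea - diffing suggested dependencies against a curated allowlist before they ever reach the package manager. The package names are invented for the example, and a real deployment would also want hash-pinning and registry checks:

```python
# Sketch of a guard against slopsquatted dependencies: before installing
# anything an AI assistant suggested, diff the list against a
# human-curated allowlist. Package names below are hypothetical.
APPROVED = {"requests", "numpy", "text-odt-parsing"}

def vet_dependencies(suggested: list[str]) -> list[str]:
    """Return the suggested packages NOT on the allowlist, so a human
    reviews them before they are ever installed."""
    return [pkg for pkg in suggested if pkg not in APPROVED]

# The convention-following (hallucinated) name gets flagged for review:
print(vet_dependencies(["requests", "odt-text-parsing"]))
```

This doesn't fix the underlying problem - it just moves the "is this library real?" question to a point where a human can answer it deliberately, instead of discovering the answer in production.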
12/
It's like the AI is constructing a spot-the-difference image puzzle on super-hard mode, swapping the fork and knife in a diner's hands from left to right and vice-versa. You couldn't generate a harder-to-spot bug if you tried.
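To make the fork-and-knife swap concrete, here's a hypothetical example of the bug class (function and numbers invented for illustration): two positional arguments of the same type get transposed, the code runs without any error, and both calls return numbers plausible enough to sail through a hurried review.

```python
# A sketch of the "swapped fork and knife" bug class: both calls run,
# both return plausible-looking numbers, only one is right.
# The function and figures are invented for this example.
def monthly_payment(principal: float, annual_rate: float) -> float:
    """Toy flat-rate formula: one year of interest, spread over 12 months."""
    return principal * (1 + annual_rate) / 12

correct = monthly_payment(12000.0, 0.05)  # 1050.0
swapped = monthly_payment(0.05, 12000.0)  # roughly 50 - silently wrong
print(correct, swapped)
```

Because both arguments are floats, no type checker complains, and because the wrong answer is still a plausible dollar amount, nothing about the output screams "bug." That's the steganographic quality of these errors.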
It's not like people are very good at supervising machines to begin with. "Automation blindness" is when you're asked to repeatedly examine the output of a generally correct machine for a long time, and somehow remain vigilant for its errors.
13/
Humans aren't really capable of remaining vigilant for things that don't ever happen - whatever attention and neuronal capacity you initially devote to this never-come eventuality is hijacked by the things that happen all the time. This is why the TSA is so fucking *amazing* at spotting water-bottles on X-rays, but consistently fails to spot the bombs and guns that red team testers smuggle into checkpoints.
14/
The median TSA screener spots a hundred water bottles a day, and is (statistically) never called upon to spot something genuinely dangerous to a flight. They have put in their 10,000 hours, and then some, on spotting water bottles, and approximately *zero* hours on spotting stuff that we really, really don't want to see on planes.
15/
So automation blindness is already going to be a problem for any "human in the loop," from a radiologist asked to sign off on an AI's interpretation of your chest X-ray to a low-paid overseas worker remote-monitoring your Waymo...to a programmer doing endless, high-speed code-review for a chatbot.
16/
But that coder has it worse than all the other in-looped humans. That coder doesn't just have to fight automation blindness - they have to fight automation blindness *and* spot the subtlest of errors in this statistically indistinguishable-from-correct code. AIs are basically doing bug steganography, smuggling code defects in by carefully blending them in with correct code.
17/
At code shops around the world, the reverse-centaurs are suffering. A survey of Stack Overflow users found that AI coding tools are creating history's most difficult-to-discharge technical debt in the form of "almost right" code full of these fiendishly subtle bugs:
venturebeat.com/ai/stack-overf…
18/
As Venturebeat reports, while usage of AI coding assistants is up (from 76% last year to 84% this year), trust in these tools is plummeting - down to 33%, with no bottom in sight. 45% of coders say that debugging AI code takes longer than writing the code without AI at all, and only 29% of coders believe that AI tools can solve complex code problems.
19/
Venturebeat concludes that there are code shops that "solve the 'almost right' problem" and see real dividends from AI tools. What they don't say is that the coders for whom "almost right" isn't a problem are centaurs, not reverse centaurs. They are in charge of their own production and tooling, and no one is using AI tools as a pretext for a relentless hurry-up amidst swingeing cuts to headcount.
20/
The AI bubble is driven by the promise of firing workers and replacing them with automation. Investors and AI companies are tacitly (and sometimes explicitly) betting that bosses who can fire a worker and replace them with a chatbot will pay the chatbot's maker an appreciable slice of that former worker's salary for an AI that takes them off the payroll.
21/
The people who find AI fun or useful or surprising are centaurs. They're making automation choices based on their own assessment of their needs and the AIs' capabilities.
*They are not the customers for AI*. AI exists to replace workers, not empower them. Even if AI can make you more productive, there is no business model in increasing your pay and decreasing your hours.
22/
AI is about disciplining labor to decrease its share of an AI-using company's profits. AI exists to lower a company's wage-bill, at your expense, with the savings split between your boss and an AI company.
23/
When Getty or the *NYT* or another media company sues an AI company for copyright infringement, that doesn't mean they are opposed to using AI to replace creative workers - they just want a larger slice of the creative workers' salaries in the form of a copyright license from the AI company that sells them the worker-displacing tool.
24/
They'll even tell you so. When the movie studios sued Midjourney, the RIAA (whose most powerful members are subsidiaries of the same companies that own the studios) sent out this press statement, attributed to RIAA CEO Mitch Glazier:
> There is a clear path forward through partnerships that both further AI innovation and foster human artistry. Unfortunately, some bad actors – like Midjourney – see only a zero-sum, winner-take-all game.
25/
Get that? The problem isn't that Midjourney wants to replace all the animation artists - it's that they didn't pay the movie studios license fees for the training data. They didn't create "partnerships."
26/
Incidentally: Mitch Glazier's last job was as a Congressional staffer who slipped an amendment into a must-pass bill that killed musicians' ability to claim back the rights to their work after 35 years through "termination of transfer." This was so outrageous that Congress held a special session to reverse it and Glazier lost his job.
Whereupon the RIAA hired him to run the show.
27/
AI companies are not pitching a future of AI-enabled centaurs. They're colluding with bosses to build a world of AI-shackled reverse centaurs. Some people are using AI tools (often standalone tools derived from open models, running on their own computers) to do some fun and exciting centaur stuff.
28/
But for the AI companies, these centaurs are a bug, not a feature - and they're the kind of bug that's far easier to spot and crush than the bugs that AI code-bots churn out in volumes no human can catalog, let alone understand.
29/
Hey, German-speakers! Through a very weird set of circumstances, I ended up owning the rights to the German audiobook of my bestselling 2022 cryptocurrency heist technothriller *Red Team Blues* and now I'm selling DRM-free audio and ebooks, along with the paperback (all in German and English) on a Kickstarter that runs until August 11:
kickstarter.com/projects/docto…
30/
Image:
Cryteria (modified)
commons.wikimedia.org/wiki/Fil…
CC BY 3.0
creativecommons.org/licenses/b…
eof/
divVerent
in reply to Cory Doctorow
Note that asking this way is pretty much biased in favor of AI.
For example, yes, I can confirm that AI tools occasionally help my workflow. I gave up on letting it do anything "interesting", but for stuff like "update all calls to this function to add an extra argument" it works rather well (not 100% though).
Of course... this is only relevant because of a lack of working IDE refactoring tools - which, thanks to AI, will now never be built or fixed, because "you can just use the AI".
So yes, AI does make me as a developer more productive. But less productive than if I had proper refactoring tools available.
Cavyherd
in reply to Cory Doctorow
This one is my personal nightmare. I've been in a few situations where I've been required to sustain attention to check for a low-incidence error embedded in a body of work, and I can sustain it for four or five seconds before my attention drifts.
The •only• way I've ever managed that class of task is to come up with hacks that make the whole thing more interesting. If they want my straight attention, it's a pretty much automatic fail.