I once worked at a company that sold industry-specific core-business software to deep-pocketed corps who couldn’t / wouldn’t / shouldn’t roll their own. I got into a discussion with my manager about whether our products were essentially — my words — a hoax.
Me: “Look, our products are riddled with bugs and holes. They’re nearly impossible to deploy, manage, and maintain. They frequently don’t even work •at all• on the putative release date, and we sell the mop-up as expensive ‘consulting.’”
1/
Paul Cantrell
in reply to Paul Cantrell
“How can it not be a hoax?!”
He said something that completely changed how I look at the workings of business:
“Paul, you are making the mistake of comparing our software to your ideal of what it •should• be. That’s not what these companies are doing. They’re comparing it to what they already have now. And what they have now is •terrible•.”
2/
Paul Cantrell
in reply to Paul Cantrell
He continued: “They’re doing business with Excel spreadsheets, or ancient mainframes, or in many cases still using pen and paper processes [this was the early 00s], and those processes are just wildly labor-intensive and error-ridden. They lose unimaginable amounts of money to this. For them to pay us a measly few million to get software that takes 18 months to get deployed and just barely working? That is a •huge• improvement for them.”
In short: our product sucked, but it wasn’t a hoax.
3/
Paul Cantrell
in reply to Paul Cantrell
There’s a weird disconnect about gen AI between the MBA crowd and the tech crowd: either it’s the magical make-money sauce CEOs can just pour on everything, or it’s fake and it’s all a hoax.
A lot of that is just gullibility and hype at play, huge amounts of investor money and wishful thinking desperately hoping to find huge payoffs in whiz-bang tech.
But: companies do actually deploy gen AI, and it sucks, and they •don’t stop•. Why?!
4/
Paul Cantrell
in reply to Paul Cantrell
I suspect that conversation long ago might shed some light on how companies are actually viewing gen AI right now. Behind all the flashy “iT cOuLD bE sKYnEt” nonsense, there’s something much more disappointingly cynical but rational: Gen AI sucks. They know it sucks. But in some cases, in some situations, viewed through certain bottom-line lenses, it sucks slightly less.
5/
Paul Cantrell
in reply to Paul Cantrell
So Megacorp’s new AI customer support tool describes features that don’t exist, or tells people to eat nails and glue, or is just •wrong•.
Guess what? Their hapless, undertrained, poverty-wage, treated-like-dirt humans who used to handle all the support didn’t actually help people either. Megacorp demanded throughput so high and incentivized ticket closure so much that their support staff were already leading people on wild goose chases, cussing them out, and/or quitting on the spot.
6/
Paul Cantrell
in reply to Paul Cantrell
Gen AI doesn’t cuss people out, doesn’t quit on the spot, and has extremely high throughput. It leads people on wild goose chases •far• more efficiently than the humans. And hell, sometimes, just by dumb luck, it’s actually right! Like…maybe more than half the time!
When your previous baseline is the self-made nightmare of late stage capitalism tech support, that is •amazing•.
7/
Paul Cantrell
in reply to Paul Cantrell
And you can control it (sort of)! And it protects you from liability (maybe)! And all it takes is money and environmental disaster!
Run that thought process across other activities where corps are deploying gen AI.
I suspect a lot of us, despite living in this modern corporate hellscape, still fail to understand just how profoundly •broken• the operations of big businesses truly are, how much they function on fakery and deception and nonsense.
So gen AI is fake? So what. So is business.
8/
Paul Cantrell
in reply to Paul Cantrell
I am hamming this up for cynical dramatic effect, but I do think there’s a serious thought here: so much activity within business delivers so little of actual value to the world that replacing slow human nonsense crap with fast automated nonsense crap seems like a win.
Looking at the world through MBA goggles, it seems perfectly rational.
When people consider gen AI, I ask them to ask themselves: “Does it matter if it’s wrong?” Often, the answer is “no.”
9/
Paul Cantrell
in reply to Paul Cantrell
If you’ll indulge another industry story — sorry, this thread is going to get absurdly long — let me tell you about one of the worst clients I ever had:
Group of brothers. They’d made fuck-you money in marketing or something. They founded a startup with a human benefit angle, do some good for the world, yada yada.
Common now, but new-ish idea at the time: gamified online health & well-being platform that a company (or maybe insurer, whatever) offers to its employees.
10/
Paul Cantrell
in reply to Paul Cantrell
The big brilliant idea at the heart of the product they were building? The Life Score: a number that quantifies your overall well-being, a number that you can try to raise by doing healthy activities.
How exactly was this number to be calculated? Eh, details.
11/
Paul Cantrell
in reply to Paul Cantrell
They had this elaborate business plan: the market opportunity, the connections, the moving parts — and in the middle of this giant world-domination scheme, a giant hole. Just a black box (currently empty) labeled “magic number that makes people get healthier.”
The core feature of their product, the lynchpin that would make the entire thing actually useful, was just a big-ass TBD.
12/
Paul Cantrell
in reply to Paul Cantrell
I was hired to implement, but quickly realized they had no idea what they wanted me to build. Worse: they hadn't hired any of the people (like, say, a health actuary or a behavioral psychologist) who would be remotely qualified to help them figure it out. The architect of their giant system was a chemical engineer of some kind who was trying to get into tech. Lots of big ideas about what it would •look like•, but nobody in sight had a clue how this thing would actually •work•. Zero R&D.
13/
Paul Cantrell
in reply to Paul Cantrell
No worries. Designers were cranking out UI! Marketers were…marketing! Turning the Life Score from vague founder notion to working system was a troublesome afterthought.
So…like a fool, I tried to help them suss it out. It turned out they •did• sort of have a notion:
1. Intake questionnaire about your lifestyle
2. Assign points to responses
3. System suggests healthy activities
4. Each activity adds points to your score if you do it
14/
Paul Cantrell
in reply to Paul Cantrell
And then, like a •damn• fool, I pointed out to them the gaping chasm between (2) and (4). Think about it: at the start, the score measures (however dubiously) the state of your health. But after you do some activities, the score measures how many activities you did.
The score •changes meaning• after intake. And it's designed to go up over time. Even if your health is getting worse.
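The score drift described above can be sketched in a few lines of Python. Everything here (the questionnaire responses, the point values, the per-activity bonus) is hypothetical, invented purely for illustration; the founders never specified any of it.

```python
# Hypothetical sketch of the Life Score mechanics as described in the thread.
# All names and point values are invented for illustration.

def intake_score(responses):
    """Steps 1-2: assign points to intake questionnaire responses."""
    points = {"exercise_weekly": 10, "nonsmoker": 15, "sleeps_8h": 5}
    return sum(points[r] for r in responses if r in points)

def life_score(responses, activities_completed, points_per_activity=5):
    """Steps 3-4: every completed activity adds points, permanently."""
    return intake_score(responses) + points_per_activity * len(activities_completed)

# At intake, the score (however dubiously) reflects reported health:
day_one = life_score(["nonsmoker"], [])                # 15

# A year later, the same person, perhaps in worse health, has a higher
# score simply for having logged activities. The number now measures
# participation, not health, and it can only go up.
year_later = life_score(["nonsmoker"], ["walk"] * 20)  # 15 + 5*20 = 115
```

The chasm is visible in the second call: no input to `life_score` ever lets the number go down, so it cannot describe a health trajectory, only an activity count.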
And like an •utter• damn fool, I thought this was a flaw.
15/
Paul Cantrell
in reply to Paul Cantrell
It was only after the whole contract crashed and burned (they were, it turns out, truly awful people) that I realized that my earnest data-conscious questions were threatening their whole model.
Their product was there to make the “healthy” line go up. Not to actually make people healthy, no! Just to make the line go up.
It was an offer of plausible deniability: for users, for their employers, for everyone. We can all •pretend• we’re getting healthier! Folks will pay good money for that.
16/
Paul Cantrell
in reply to Paul Cantrell
Of •course• their whole business plan had a gaping hole at the center. That was the point! If that Life Score is •accurate•, if it actually describes the real-world state of a person’s health in any kind of meaningful way, that wrecks the whole thing.
Now, of course, there would be no Paul to ask them annoying questions about the integrity of their metrics. They’d just build it with gen AI.
17/
Paul Cantrell
in reply to Paul Cantrell
Would gen AI actually be a good way to help people get healthy with this product? No. But that was never the goal.
Would gen AI have been a good option for these rich people trying to get richer by building a giant hoax box that lets a bunch of parties plausibly claim improved employee health regardless of reality? Hell yes.
18/
Paul Cantrell
in reply to Paul Cantrell
Again, my gen AI question: Does it matter if it’s wrong?
I mean, in some situations, yes…right? Like, say, vehicles that can kill people?
Tesla’s out there selling these self-crashing cars that are •clearly• not ready for prime time, and trap people inside with their unopenable-after-accident doors and burn them alive. And they’re •still• selling crap-tons of those things.
If it doesn’t matter to •them•, how many biz situations are there where “fake and dangerous” is 100% acceptable?
19/
Paul Cantrell
in reply to Paul Cantrell
Does it matter if it’s wrong?
In the nihilism of this current stage of capitalism, “no” sure looks like a winning bet.
/end
Cavyherd
in reply to Paul Cantrell
This actually sounds like one of the vendors I have to deal with.
They're putting this database system together for us—but show no signs whatever of ever having dealt with basic database systems...?
DeManiak 🇿🇦 🐧-More Croutons
in reply to Paul Cantrell
...this resonates...
a thought I had the other day:
"genAI is perfectly fine for things that don't really matter, or that you don't really care about."
And to your point - in a big business, there is a LOT of stuff that matches those criteria.
Jeff Miller (orange hatband)
in reply to Paul Cantrell
There were customers who leaned hard into making the automated response system work really well in ways that helped everybody touching the system, and there were a few really perfect cases: "Your flight has been cancelled, click here to choose what to do about it."
Better than Excel means that the analysts aren't wrestling with the data intake and reporting pipeline, and can do some analysis and report on it.
tom jennings
in reply to Paul Cantrell
in reply to Paul Cantrell • • •I think also is this: yeah it's big complex expensive and doesn't work well: but instead of big custom enterprise software with a zillion custom screens/forms and associated databases, from now til when ever all we change is prompt rules, and the ai software itself will alleviate the need for all the custom screens and DB stuff.
Of course it's nonsense, and also like the promise of computing in the 1940s/50s: a big machine will do the work! We just get some little ladies to write some programs that make the machine dance, and you're done!
(The history of the very earliest programming is very revealing; it was assumed to be mere clerical work. Five years in, people realized it was actual skilled labor, and programmers went from being mostly women to mostly men within a decade.)
Florine
in reply to Paul Cantrell
Terrifying thread ☝️
@inthehands
Paul Cantrell
in reply to George E. 🇺🇸♥🇺🇦🇵🇸🏳️🌈🏳️⚧️
No, but one of the companies I worked for was purchased by Oracle after I left, so yes, I was in that general universe.
Exandra
in reply to Paul Cantrell
Thanks Paul, your posts helped set to rest some confusion I had about gen AI: how can people, knowing it is bad for their own businesses when it inevitably and continually fails and implicates them, still want to incorporate it into their products?
Your thread answers this question!