I intuitively understand AI psychosis to be a real phenomenon. We joke about NPC reddit midwits, but what happens when you give them a relatively primitive LLM? They become delusional out of convenience.
I see it in my real life too. "Regular people" completely outsourcing their brains to a hallucinating chat bot. They believe it. They don't check it. They just assume it's true. It's crazy. Midwits who had stuck to their lane out of intellectual helplessness will suddenly start doing crazy and dangerous things because ChatGPT told them how, but only taught them enough to be dangerous.
GNU/翠星石 in reply to CrunkLord420

djsumdog in reply to GNU/翠星石
Exactly. I know the word police have been exasperatingly annoying, but words are important here. The random next-word-guessing machine cannot lie or hallucinate, because it has no intent.
The biggest danger of LLMs isn't their output, it's people's trust in it. Like this moron who asked Grok, when less than 3 minutes of basic lookups (along with a court case I knew from actual previous research) showed the answer was entirely wrong: djsumdog.com/@djsumdog/posts/A…
Most of my friends don't even know what the term "LLM" means. Some have gone to "AI courses" at their schools and still don't understand the core mechanic: all these chatbots' models are just massive lists of parts of words, with floating-point values tying each part of a word to every other part of a word. That's it. Nothing else. No one is teaching the very basics of what the models are. They literally only get things "right" by accident, and people keep asking "Grok, is this true?" no matter how many times it obviously generates random words that are verifiably untrue.
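To make that concrete, here is a toy sketch of my own (not any real chatbot's code, and closer to a Markov chain than a real transformer) showing what "lists of word parts with floating-point values" means in practice. The vocabulary and every number are invented for illustration:

```python
import random

# A drastically simplified model: a vocabulary of word fragments (tokens),
# plus floating-point weights from each fragment to every other fragment.
# Real models learn billions of such weights over longer contexts, but the
# mechanism is the same: weights between word parts, no model of truth.
vocab = ["the", " cat", " sat", " on", " mat", "."]

# weights[token][j] = how strongly `token` points at vocab[j] as the next token.
weights = {
    "the":  [0.0, 2.5, 0.1, 0.1, 1.8, 0.0],
    " cat": [0.0, 0.0, 2.2, 0.3, 0.1, 0.4],
    " sat": [0.1, 0.0, 0.0, 2.7, 0.1, 0.3],
    " on":  [2.4, 0.2, 0.0, 0.0, 0.5, 0.0],
    " mat": [0.1, 0.0, 0.2, 0.1, 0.0, 2.6],
    ".":    [1.5, 0.1, 0.1, 0.1, 0.1, 0.2],
}

def next_token(current: str) -> str:
    # Weighted dice roll over the vocabulary. Note what is absent:
    # no fact lookup, no notion of "true" or "false".
    return random.choices(vocab, weights=weights[current], k=1)[0]

tokens = ["the"]
for _ in range(7):
    tokens.append(next_token(tokens[-1]))
print("".join(tokens))  # e.g. "the cat sat on the mat. the" -- fluent, never fact-checked
```

The generation loop is score, sample, append, repeat. Nowhere in it is a step that checks the output against reality, which is why "Grok, is this true?" is the wrong question to ask it.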
woodland creature in reply to djsumdog