Couldn't resist, so I gave Rachael's arguments to ChatGPT and asked it for a rebuttal. It only seemed fair. 😉

----------

That’s a strong critique, and it deserves a thoughtful response. Here’s a structured rebuttal that addresses the core concerns without dismissing the legitimate points behind them:

---

**1. “LLMs aren’t language, they’re fancy guesswork.”**

Yes, LLMs are fundamentally statistical pattern matchers. They don’t "understand" language in the way humans do—they model *form*, not *meaning*, by predicting likely word sequences. But this doesn't disqualify them from producing *language*. Language is, at its core, a structured signal system governed by patterns, and LLMs are very good at modeling those patterns.

**Counterpoint:** Language as humans use it has always relied on prediction, pattern, and statistical regularity. Children don’t learn language by studying formal grammars—they internalize usage patterns. LLMs mimic this, albeit in a different (and much shallower) way. If a system produces coherent sentences that people interpret meaningfully, denying that it’s “language” becomes a philosophical stance rather than an empirical one.

---

**2. “LLMs ignore linguistic science.”**

This is partly true. Many LLM architectures, especially early ones, were developed without deep engagement with linguistic theory. They're engineering-first tools. However, recent research increasingly connects LLM behavior with linguistic phenomena—syntax, semantics, pragmatics, etc.

**Counterpoint:** While they didn’t emerge *from* linguistics, LLMs now serve as testbeds *for* linguistic hypotheses. They show surprising emergent behaviors (like handling long-distance dependencies or discourse structure) that align with linguistic theory—even if not perfectly. That’s not to say they're replacements for theory, but they're tools with real utility.

---

**3. “Refining LLMs isn’t progress in language processing.”**

If your definition of progress is rooted in theoretical understanding of language, you might be disappointed. But if it’s about building tools that produce useful language outputs across translation, summarization, Q&A, education, etc., then LLMs represent a giant leap forward.

**Counterpoint:** Just because LLMs don’t solve *fundamental* questions in linguistics doesn’t mean they aren’t progress. They’re like early planes: they didn’t explain gravity, but they flew. That’s still remarkable engineering.

---

**4. “LLMs lie about being language.”**

They don’t lie. People do—by overhyping them or anthropomorphizing them. LLMs are tools. They simulate language use without consciousness or intent. The criticism should be aimed at those misusing them or misrepresenting their capabilities, not at the models themselves.

---

**In summary:** LLMs aren’t replacements for linguistic theory, but they *do* generate language, as defined by usage and comprehension. They’re flawed, impressive, limited, and powerful—sometimes all at once. Dismissing them entirely risks ignoring the genuinely novel linguistic behavior they exhibit, even if that behavior arises from "fancy guesswork." If you're open to it, this isn't a war between real linguistics and AI—it’s a potential collaboration.
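The "fancy guesswork" being debated can be made concrete with a toy bigram model — a deliberately simplified sketch that predicts the next word purely from counts of what followed it before. Real LLMs are vastly more sophisticated, but the underlying idea (predict the most likely continuation from observed patterns) is the same; the corpus and function names here are made up for illustration:

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; any text works.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: a bigram table.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" — it follows "the" twice, "mat" only once
```

No grammar, no meaning, just frequency — yet the output is well-formed continuations, which is roughly the point both sides of the argument are circling.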
dfyx
in reply to d_dad: Let's assume that the Axis winning the war means they keep all the territory they held at the height of their expansion in our timeline but don't expand much more, at least not immediately.
File:World War II in Europe, 1942 (no labels).svg - Wikimedia Commons
commons.wikimedia.org

matte
in reply to dfyx:

dfyx
in reply to matte: Hard to say. I'm not a historian, so I can only speculate. I would assume that Hitler would eventually select a successor, and there is no way of telling how good that person would be at keeping the Reich in order.
As far as I understand it, the fall of the Soviet Union was preceded by at least a decade of economic struggle caused by a multitude of factors. Basically the only things they had to export were oil and weapons, and the only nations they could trade with were relatively poor. When their oil production costs kept rising, they just couldn't keep their exports high enough to import enough food and luxury goods to keep their population happy. This was a prime driver of unrest in regions that bordered the West, especially East Germany, which of course got news of what life in West Germany was like. The Soviets were eventually forced to open the Berlin Wall, and from there, there was nothing they could do to keep people from simply leaving, collapsing the economy in the process. To this day, 35 years after reunification, the former East Germany is way behind the rest of the country even though on paper it has the same chances as everyone else, simply because of a massive brain drain.
So overall, the collapse of the Soviet Union was less a failure of communism itself and more a failure to counteract their economic weaknesses as well as a result of their isolationism. The USA didn't win the Cold War because of the inherent superiority of capitalism but because the world drinks Coca Cola, wears jeans, watches Hollywood movies and works with IBM-compatible PCs. If the Soviet Union had pivoted their economy to those kinds of goods and had managed to export them to the west, they might have become what China is today.
So it all comes down to the question of whether alternate-history Germany manages to do that. With technology advancing more slowly overall and therefore becoming less of a factor in global markets, and with Germany keeping many of the top scientists who in our timeline left for the other superpowers, they could probably do it.