Well, it’s not just a “text predictor,” is it? You can pretend that today we still only have GPT-2 and that there is only pre-training on a large corpus of information, but that simply isn’t true.
It is, by definition, design, and architecture, a system that produces believable text.
Here's a task to give it which pulls the veil right off:
Ask it to add tests to a piece of code that has 100% code coverage but doesn’t actually test the functionality 100%. You’ll start seeing nonsense sooner or later.
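For concreteness, here’s a minimal hypothetical sketch of the kind of code I mean: the function and test are mine, but they show how a test suite can execute every line (100% line coverage) while never actually checking the behavior.

```python
def apply_discount(price, percent):
    # Bug: subtracts the raw number instead of a percentage.
    discounted = price - percent  # should be price * (1 - percent / 100)
    return max(discounted, 0)

def test_apply_discount():
    # This executes every line of apply_discount: coverage reports 100%...
    result = apply_discount(200, 10)
    # ...but the assertion only checks the type, never the value,
    # so the bug (190 instead of 180) goes completely unnoticed.
    assert isinstance(result, (int, float))
```

Coverage tools count which lines ran, not which behaviors were verified, which is exactly the gap this task asks the model to spot.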
A leading theory in neuroscience is that human brains are fundamentally prediction machines too, constantly predicting sensory input, other people’s behavior, even the next word in a sentence. “It’s just prediction” isn’t the gotcha you think it is. Prediction and attention turn out to be a surprisingly powerful foundation for intelligence.
The “just a text predictor” framing was fair a couple of years ago but hasn’t kept up. Current models can genuinely identify untested edge cases even when coverage is 100%. You're definitely using the latest and greatest models?
The architecture started as next-token prediction, sure. And yes, human judgment is still required, but that judgment is being captured and integrated too.
Every time millions of people use these models, their feedback feeds the next round of improvements.
Also, these models don’t need to replace your best engineers to be disruptive. They just need to outcompete the bottom of the bell curve. For a lot of junior-level work, we’re already getting close.
> You're definitely using the latest and greatest models?
Claude 4.6 opus high, specifically.
As for human brains: every self-respecting neural networks 101 course is prefaced with "don't draw analogies to the human brain," and for good reason. Biological neural networks are fundamentally far more complex at every scale.
Also, the brain does indeed predict, but it also verifies those predictions and learns from them. LLMs don't do that, at least not in real time.