What the author refers to there as GPT-5, GPT-5.5, and GPT-6 are, respectively, "the models with a pre-training size 10x, 100x, and 1,000x greater than GPT-4.5." He's aware that what OpenAI will actually brand as GPT-5 is the router model that just chooses which of the other models to use, but he regards that as a sign that OpenAI agrees "the model that is 10x the pre-training size of GPT-4.5" won't be that impressive.
It's slightly confusing terminology, but in fairness there is no agreed-upon name for the next three order-of-magnitude scale-ups of pretraining. In any case, the author is not confused about what OpenAI intends to brand as GPT-5.