The narrative quietly assumes that this exponential curve can in fact continue since it will be the harbinger of the technological singularity. Seems more than a bit eschatological, but who knows.
If we suppose this tech rapture does happen, all bets are off; in that sense it's probably better to assume the curve is sigmoidal, since the alternative is literally beyond human comprehension.
Barring fully reversible processes as the basis for technology, you still quickly run into energy and cooling constraints. Even with that, you'd have time or energy density constraints. Unlimited exponentials are clearly unphysical.
Yes, this is an accurate description, and also completely irrelevant to the issue at hand.
At the stage of development we're at today, no one cares how long it takes for the exponent to go from eating our galaxy to eating the whole universe, or whether it'll break some energy density constraint before that and leave a gaping zero-point energy hole where our local cluster used to be.
It'll stop eventually. What we care about is whether it stops before it breaks everything for us, here on Earth. And that's not at all a given. Fundamental limits are irrelevant to us - it's like worrying that putting too many socks in a drawer will eventually make them collapse into a black hole. The limits that are relevant to us are much lower, set by technological, social and economic factors. It's much harder to say where those limits lie.
Sure, but it reminds us that we are dealing with an S-curve, so we need to ask where the inflection point is. That is: what are the relevant constraints, and can they reasonably sustain exponential growth for a while still? At least as an outsider, it's not obvious to me that we won't e.g. run into bandwidth or efficiency constraints that make scaling to larger models infeasible without reimagining the sorts of processors we're using. Perhaps we'll need to shift to analog computers or something to break through cooling problems, and if the machine cannot find the designs for the new paradigm it needs, it can't make those exponential self-improvements (until it matches its current performance within the new paradigm, it gets no benefit from design improvements it makes).
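Part of why the inflection point is hard to locate: before it, a logistic (S-shaped) curve is numerically almost indistinguishable from a pure exponential. A minimal sketch, with an arbitrary carrying capacity and rate chosen purely for illustration:

```python
import math

def exponential(t, r=1.0):
    # Unbounded exponential growth.
    return math.exp(r * t)

def logistic(t, K=1e6, r=1.0, t0=math.log(1e6)):
    # Logistic curve saturating at carrying capacity K (arbitrary here).
    # With t0 = ln(K), its early portion tracks exp(r*t) almost exactly.
    return K / (1.0 + math.exp(-r * (t - t0)))

# Well before the inflection point t0, the two curves agree closely;
# well after it, the logistic flattens at K while the exponential runs away.
```

The point of the sketch: samples taken early on can't distinguish the two curves, so observing exponential progress today tells you little about where the constraints bite.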
My experience is that "AI can write programs" is only true for the smallest tasks; anything slightly nontrivial leaves it incapable of even getting started. It's not that it "often makes mistakes or goes in a wrong direction" - I've never seen it go anywhere near the right direction for a nontrivial task.
That doesn't mean it won't have a large impact; as an autocomplete these things can be quite useful today. But when we take a more honest look at what it can do now, it's less obvious that we'll hit some kind of singularity before hitting a constraint.