Exactly. I'm arguing that what we should be focused on at this relatively early stage is not the amount of output but the rate of innovation.
It's worth noting that we're now debating the quality of something that was a "ha, ha, interesting" side note by Andrej Karpathy ten years ago [0], and then became "ha, ha, useful for weekend projects" in his tweet from a year ago. I'm looking forward to reading what he'll be saying in the next few years.
Because in the early days of a new technology, its advantages accrue only to its direct users (in this case, the programmers).
After a while, however, corporations notice the benefit and push their employees into an efficiency race, until most of the gains have shifted away from the employees and toward their bosses.
Only after this efficiency race will the benefits become observable from a macro perspective.
Why is GPT-3 relevant? I can't recall anyone using GPT-3 directly to generate code. The closest would probably be Tabnine's autocompletion, which I think first used GPT-2, but I can't recall any robust generation of full functions (let alone programs) before late 2022 with the original GitHub Copilot.