> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.
"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?
There's certainly potential, but a lot of the market is hot air right now.
> Either way, the market is going to punish them accordingly.
I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.
IMO if the market is going to punish anyone it’s the people who, today, find that AI is able to do all their coding for them.
The skeptics are the ones who have tried AI coding agents and come away unimpressed because the agents can't do what they do. If you're proudly proclaiming that AI can replace your work, then you're telling on yourself.
That's asking the wrong question, and I suspect coming from a place of defensiveness, looking to justify one's own existence. "Is there anything I can do that the machine can't?" is the wrong question. "How can I do more with the machine's help?" is the right one.
What's "there" though is that despite being wrappers of chat gpt, the product itself is so compelling that it's essentially got a grip on the entire american economy. That's why everyone's crabs in a bucket about it, there's something real that everyone wants to hitch on to. People compare crypto or NFTs to this in terms of hype cycle, but it's not even close.
>there's something real that everyone wants to hitch on to.
Yeah, stock prices, unregulated consolidation, and a chance to replace the labor market. Next to penis enhancement, it's a CEO's wet dream. They will bet it all for that chance.
Granted, I think this hastiness will lead to a crash, so the CEOs played themselves short term.
> simply because the market has never really punished people for being less efficient at their jobs
In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.
Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).
In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.
This is very true. So you can't just ask people to use AI and expect better output, even with all the AI hype. In a typical big team/company, the bottleneck is not how many lines of code you can produce.
I think this means a lot of big businesses are about to get "disrupted", because small teams can become more efficient: for them, the sheer generation of sometimes-boilerplate, low-quality code actually is a bottleneck.
Sadly, capitalism rewards scarcity at a macro level, which in some ways is the opposite of efficiency. It also grants "social status" to the scarce via more resources. As long as you aren't disrupted, and everyone in your industry does the same/colludes, restricting output and working less usually commands more money up to a certain point (prices are set more like a monopoly's in these markets). It's just that scarcity was, in the past, correlated with difficulty, which made it "somewhat fair" -> AI changes that.
It's why unions, associations, professional bodies, etc. exist, for example. This whole thread is an example -> the value gained from efficiency in SWE jobs doesn't seem to be accruing to the people with SWE skills.
I think part of this is that there is no one AI and there is no one point in time.
The other day Claude Code correctly debugged an issue for me that was seen in production, in a large product. It found a bug a human wrote and a human reviewed, and fixed it. For those interested, the bug had to do with chunk decoding: the author incorrectly re-initialized the decoder inside the loop for every chunk. So a single chunk works; more than one chunk fails.
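The comment doesn't show the actual code, but here's a minimal Python sketch of that bug class, using an incremental UTF-8 decoder as a stand-in for whatever decoder was actually involved: re-initializing the decoder inside the chunk loop throws away buffered state, so one chunk works and more than one fails.

```python
# Hypothetical reconstruction of the bug class described above;
# the real code base and decoder are not shown in the comment.
import codecs

def decode_buggy(chunks):
    out = []
    for chunk in chunks:
        # BUG: a fresh decoder per chunk discards buffered state, so a
        # multi-byte character split across chunks is mangled.
        decoder = codecs.getincrementaldecoder("utf-8")()
        out.append(decoder.decode(chunk))
    return "".join(out)

def decode_fixed(chunks):
    # Initialize once, outside the loop, so state carries across chunks.
    decoder = codecs.getincrementaldecoder("utf-8")()
    out = [decoder.decode(chunk) for chunk in chunks]
    out.append(decoder.decode(b"", final=True))  # flush any remainder
    return "".join(out)

data = "héllo".encode("utf-8")             # b'h\xc3\xa9llo'
print(decode_buggy([data]))                 # 'héllo' -- one chunk, bug invisible
print(decode_fixed([data[:2], data[2:]]))   # 'héllo' -- stateful decode works
try:
    decode_buggy([data[:2], data[2:]])      # the 2-byte 'é' straddles the boundary
except UnicodeDecodeError as e:
    print("buggy version fails on >1 chunk:", e)
```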
I was not familiar with the code base. Developers who worked on the code base spent some time and didn't figure out what was going on; they also were not familiar with the specific code. But once Claude pointed this out, it became pretty obvious, and Claude rewrote the code correctly.
So when someone tells me "there's not much there" and the evidence says the opposite, I'm going to believe my own lying eyes. And yes, I could have done this myself, but Claude did this much faster and correctly.
That said, it does not handle all tasks with the same consistency. Some things it can really mess up. So you need to learn what it does well and what it does less well and how and when to interact with it to get the results you want.
It is automation on steroids with near-human (let's say intern-level) capabilities. It makes mistakes, sometimes stupid ones, but so do humans.
>So when someone tells me "there's not much there" and the evidence says the opposite, I'm going to believe my own lying eyes. And yes, I could have done this myself, but Claude did this much faster and correctly.
If the stories were more like this, where AI was an aid (a.k.a. a fancy autocomplete), devs would probably be much more optimistic. I'd love more debugging tools.
Unfortunately, the lesson an executive would take from this is "wow, AI is great! Fire those engineers who didn't figure it out." Then it creeps to "okay, have AI make a better version of this chunk decoder," which is wrong on multiple levels. Can you imagine if the result of using IntelliSense for the first time was to slash your office in half? I'd hate autocomplete too.
"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?
There's certainly potential but a lot of the market is hot air right now.
> Either way, the market is going to punish them accordingly.
I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.