
I’ve been an engineer for 20 years, for myself, for small companies, and for big tech, and I’m now running my own SaaS company.

There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.





Punishment eh? Serves them right for being skeptical.

I've been around long enough that I have seen four hype cycles around AI-like coding environments. If you think this is new you should have been there in the '80s (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the '60s (which I did not personally witness, on account of being a toddler), when COBOL, the language for managers, was all the rage.

In between there was LISP, the AI language (and a couple of others).

I've done a bit more than looking at this and saying 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive, and they might become a little bit more productive, but the AI has its own envelope of expertise, and the closer you are to the top of the field, the smaller your returns in that particular setting will be.

In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.

Where I do get use out of it: to quickly look up some verifiable fact, to tell me what a particular acronym stands for in some context, to be slightly more functional than Wikipedia for a quick overview of some subfield (but you had better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers who are not using AI are failing at their job, and it is at best - for me - a very mild accelerator in some use cases. I've seen enough AI-driven coding projects strand hopelessly by now to know that there are downsides to that golden acorn you are seeing.

The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer (by way of verification), the answers were so laughably incorrect that it was embarrassing.


I'm not a big LLM booster, but I will say that they're really good for proofs of concept, for turning detailed pseudocode into code, and sometimes for getting debugging ideas. I'm a decade younger than you, but I've programmed in 4GLs (yuch), lived through a few attempts at visual programming (ugh), and ... LLM assistance is different. It's not magic, and it does really poorly at the things I'm truly expert at, but it does quite well with boring stuff that's still a substantial amount of programming.

And for the better. I've honestly not had this much fun programming applications (as opposed to student stuff and inner loops) in years.


> but it does quite well with boring stuff that's still a substantial amount of programming.

I'm happy that it works out for you, and probably this is a reflection of the kind of work that I do, but I wouldn't know how to begin to solve a problem like designing a Braille wheel or a windmill using AI tools, even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD, but I am never limited by my typing speed, much more so by thinking about what it is that I actually want to make.


I've used it a little for OpenSCAD with mixed results - sometimes it worked. But I'm a beginner at OpenSCAD and suspect that if I were better it would have been faster to just code it. It took a lot of English to describe the shape - quite possibly more than it would have taken to just write in OpenSCAD. Saying "a cube 3cm wide by 5cm high by 2cm deep" vs cube([3, 2, 5]) ... and as you say, the hard part is before the OpenSCAD anyway.

OpenSCAD has a very steep learning curve. The big trick is not to think sequentially but to design the part 'whole'. That requires a mental switch. Instead of building something and then adding a chamfered edge (which is possible, but really tricky if the object is complex enough) you build it out of primitives that you've already chamfered (or beveled). A strategic 'hull' here and there to close the gaps helps a lot.

Another very useful trick is to think in terms of the vertices of your object rather than the primitives created by those vertices. You then put hulls over the vertices, and if you use little spheres for the vertices the edges take care of themselves.
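
To make that concrete, here is a minimal sketch of the idea (the dimensions and corner radius are made up, so treat it as illustrative rather than a finished part):

    // A rounded box built the 'vertices first' way: put a small sphere at
    // each corner and let hull() fill in the edges and faces.
    size = [30, 20, 10];   // made-up outer dimensions
    r = 1;                 // corner radius, doubles as the edge rounding

    module rounded_box(s, r) {
        hull() {
            // one sphere per corner, inset by r so the outer size stays s
            for (x = [r, s[0] - r], y = [r, s[1] - r], z = [r, s[2] - r])
                translate([x, y, z]) sphere(r = r, $fn = 32);
        }
    }

    rounded_box(size, r);

Swap the sphere for a primitive you've already chamfered and you get chamfered edges instead of rounded ones; the hull still takes care of the faces.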

This is just about edges and chamfers, but the same kind of thinking applies to most of OpenSCAD. If I compare how productive I am with OpenSCAD vs a traditional step-by-step, UI-driven CAD tool, there is no comparison. It's like exploratory programming, but for physical objects.


> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?

There's certainly potential but a lot of the market is hot air right now.

> Either way, the market is going to punish them accordingly.

I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.


IMO if the market is going to punish anyone it’s the people who, today, find that AI is able to do all their coding for them.

The skeptics are the ones that have tried AI coding agents and come away unimpressed because it can’t do what they do. If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.


> If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.

That's a very interesting observation. I think I'm safe for now ;)


> it can’t do what they do

That's asking the wrong question, and I suspect coming from a place of defensiveness, looking to justify one's own existence. "Is there anything I can do that the machine can't?" is the wrong question. "How can I do more with the machine's help?" is the right one.


What's "there" though is that despite being wrappers of chat gpt, the product itself is so compelling that it's essentially got a grip on the entire american economy. That's why everyone's crabs in a bucket about it, there's something real that everyone wants to hitch on to. People compare crypto or NFTs to this in terms of hype cycle, but it's not even close.

>there's something real that everyone wants to hitch on to.

Yeah, stock prices, unregulated consolidation, and a chance to replace the labor market. Next to penis enhancement, it's a CEO's wet dream. They will bet it all for that chance.

Granted, I think its hastiness will lead to a crash, so the CEOs played themselves short term.


Sure, but under it all there's something of value... that's why it's a much larger hype wave than dick pills

> simply because the market has never really punished people for being less efficient at their jobs

In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.

Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).

In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.


This is very true. So you can't just ask people to use AI and expect better output, even if AI is all the hype. The bottleneck is not how many lines of code you can produce in a typical big team/company.

I think this means a lot of big businesses are about to get "disrupted", because small teams can become more efficient: for them, the sheer generation of sometimes-boilerplate, low-quality code actually is a bottleneck.


Sadly, capitalism rewards scarcity at a macro level, which in some ways is the opposite of efficiency. It also grants "social status" to the scarce via more resources. As long as you aren't disrupted, and everyone in your industry does the same/colludes, restricting output and working less usually commands more money up to a certain point (prices are set more as in a monopoly in these markets). It's just that scarcity was in the past correlated with difficulty, which made it "somewhat fair" -> AI changes that.

It's why unions, associations, professional bodies, etc. exist, for example. This whole thread is an example -> the value gained from efficiency in SWE jobs doesn't seem to be accruing to the people with SWE skills.


I think part of this is that there is no one AI and there is no one point in time.

The other day Claude Code correctly debugged an issue for me that was seen in production, in a large product. It found a bug that a human wrote and a human reviewed, and fixed it. For those interested, the bug had to do with chunk decoding: the author incorrectly re-initialized the decoder inside the loop, for every chunk. So a single chunk works; more than one chunk fails.
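
For the curious, the shape of the bug was roughly this (a small Python sketch with made-up names, assuming a UTF-8 incremental decoder; the real code and decoder were different):

    import codecs

    def decode_chunks_buggy(chunks):
        out = []
        for chunk in chunks:
            # BUG: a fresh decoder per chunk throws away the buffered partial
            # bytes, so a character split across a chunk boundary is mangled.
            decoder = codecs.getincrementaldecoder("utf-8")()
            out.append(decoder.decode(chunk))
        return "".join(out)

    def decode_chunks_fixed(chunks):
        # One decoder for the whole stream keeps state between chunks.
        decoder = codecs.getincrementaldecoder("utf-8")()
        out = [decoder.decode(chunk) for chunk in chunks]
        out.append(decoder.decode(b"", final=True))
        return "".join(out)

    data = "héllo".encode("utf-8")
    print(decode_chunks_fixed([data]))                 # one chunk: both versions work
    print(decode_chunks_fixed([data[:2], data[2:]]))   # split chunk: only the fixed one survives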

I was not familiar with the code base. Developers who worked on the code base spent some time and didn't figure out what was going on. They also were not familiar with the specific code. But once Claude pointed this out it became pretty obvious, and Claude rewrote the code correctly.

So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.

That said, it does not handle all tasks with the same consistency. Some things it can really mess up. So you need to learn what it does well and what it does less well and how and when to interact with it to get the results you want.

It is automation on steroids with near-human (let's say intern) capabilities. It makes mistakes, sometimes stupid ones, but so do humans.


>So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.

If the stories were more like this, where AI was an aid (AKA a fancy autocomplete), devs would probably be much more optimistic. I'd love more debugging tools.

Unfortunately, the lesson an executive here would take away is "wow, AI is great! fire those engineers who didn't figure it out". Then it creeps to "okay, have AI make a better version of this chunk decoder". Which is wrong on multiple levels. Can you imagine if the result of using IntelliSense for the first time was to slash your office in half? I'd hate autocomplete too.


> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

I would argue that the "actual job" is simply to solve problems. The client / customer ultimately do not care what technology you use. Hell, they don't really care if there's technology at all.

And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.


Again, AI isn’t the right tool for every job, but that’s not the same thing as a shallow dismissal.

What you described isn't a shallow dismissal. They tried it, found it to not be useful in solving the problems they face, and moved on. That's what any reasonable professional should do if a tool isn't providing them value. Just because you and they disagree on whether the tool provides value doesn't mean that they are "failing at their job".

It is however much less of a shallow dismissal of a tool than your shallow dismissal of a person, or in fact a large group of persons.

Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.

>To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems

This feels like the mentality of "a solution trying to find a problem". There are enough actual problems to solve that I don't need to create more.

But sure, the extension of this is "Then they go home, research more use cases, see a kerfuffle of legal, community, and environmental concerns, and decide not to get involved in the politics".

>Either way, the market is going to punish them accordingly.

If you want to punish me because I gave evaluations you disagreed with, you're probably not a company I want to work for. I'm not a middle manager.


It really depends on what you’re doing. AI models are great at junior-level programming tasks. They have very broad but often shallow knowledge - so if your job involves jumping between 18 different tools and languages you don’t know very well, they’re a huge productivity boost. “I don’t write much SQL, or much Python. Make a query using sqlalchemy which solves this problem. Here’s our schema …”
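
To make that concrete, this is the kind of query I have in mind (the schema, model names, and connection string are all made up for illustration):

    from datetime import datetime, timedelta, timezone

    from sqlalchemy import create_engine, func, select
    from sqlalchemy.orm import Session

    from myapp.models import Customer, Order   # hypothetical ORM models

    engine = create_engine("postgresql:///shop")   # made-up connection string
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)

    # Total spend per customer over the last 30 days, biggest spenders first.
    stmt = (
        select(Customer.name, func.sum(Order.total).label("spend"))
        .join(Order, Order.customer_id == Customer.id)
        .where(Order.created_at >= cutoff)
        .group_by(Customer.name)
        .order_by(func.sum(Order.total).desc())
    )

    with Session(engine) as session:
        for name, spend in session.execute(stmt):
            print(name, spend)

Not something I’d struggle with if I wrote SQLAlchemy every day, but exactly the sort of glue the model spits out instantly when I don’t.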

AI is terrible at anything it hasn’t seen 1000 times before on GitHub. It’s bad at complex algorithmic work. Ask it to implement an order statistic tree with internal run-length encoding and it will barely be able to get off the starting line. And if it does, the code will be so broken that it’s faster to start from scratch. It’s bad at writing Rust. ChatGPT just can’t get its head around lifetimes. It can’t deal with really big projects - there’s just not enough context. And its code is always a bit amateurish. I have 10+ years of experience in JS/TS. It writes code like someone with about 6-24 months of experience in the language. For anything more complex than a React component, I just wouldn’t ship what it writes.

I use it sometimes. You clearly use it a lot. For some jobs it adds a lot of value. For others it’s worse than useless. If some people think it’s a waste of time for them, it’s possible they haven’t really tried it. It’s also possible their job is a bit different from your job and it doesn’t help them.


> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction

Or, and stay with me on this, it’s a reaction to the actual experience they had.

I’ve experimented with AI a bunch. When I’m doing something utterly formulaic it delivers (straightforward CRUD-type stuff, or making a web page to display some data). But when I try to use it on the core parts of my job that actually require my specialist knowledge, it falls apart. I spend more time correcting it than if I just wrote it myself.

Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.


> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries Haskell, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries Emacs, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries FreeBSD, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

We're getting paid to solve the problem, not to play with the shiniest newest tools. If it gets the job done, it gets the job done.


> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

I have solved more problems with tools like sed and awk, you know, actual tools, than I've entered tokens into an LLM.

Nobody seemed to give a fuck as long as the problem was solved.

This is getting out of hand.


Just because you can solve problems with one class of tools doesn’t mean another class is pointless. A whole new class of problems just became solvable.

> A whole new class of problems just became solvable.

This is almost by definition not really true. LLMs spit out whatever they were trained on, mashed up. The solutions they have access to are exactly the ones that already exist, and for the most part those solutions will have existed in droves to have any semblance of utility to the LLM.

If you're referring to "mass code output" as "a new class of problem", we've had code generators of differing input complexity for a very long time; it's hardly new.

So what do you really mean when you say that a new class of problems became solvable?


But sed and awk are problems.

I would've thought that in 20 years you would have met other devs who do not think like you?

something I enjoy about our line of work is there are different ways to be good at it, and different ways to be useful. I really enjoy the way different types of people make a team that knows its strengths and weaknesses.

anyway, I know a few great engineers who shrug at the agents. I think different types of thinker find engagement with these complex tools to be a very different experience. these tools suit some but not all and that's ok


This is the correct viewpoint (in my opinion, of course). There are many ways that lead to a solution, some are better, some are worse, some are faster, some much slower. Different tools and different strokes for different folks and if it works for you then more power to you. That doesn't mean you get to discard everybody for whom it does not work in exactly the same way.

I think a big mistake junior managers make is that they think that their nominal subordinates should solve problems the way that they would solve them, without recognizing that there are multiple valid paths and that it doesn't so much matter which path is chosen as long as the problem is solved on time and within the allocated budget.


I use AI all the time, but the only gain it has over me is better spelling and grammar. Spelling and grammar have long been my weak point. I can write the same code it writes just as fast without it - typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code the AI writes, since it is incorrect rather often, so it isn't saving any effort, other than the time to look up the middle word of some long variable name.

My dismissal I think indicates exhaustion from the additional work I’d need to do to make an LLM write my code, annoyance at its inaccuracies, and disgust at the massive scam and grift that is the LLM influencers.

Writing code via an LLM feels like writing with a wet noodle. It’s much faster to write what I mean, myself, with the terse was and precision of my own thought.


> with the terse was and precision of my own thought

Hehe. So much for precision ;)


Autocorrected “terse-ness”

Autocorrect is my nemesis. And I suspect it has teamed up with email address completion.

I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)

I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)

FTR, the market is currently punishing people who DO use it (CVs are routinely being dumped at the merest hint of AI being used in their construction/presentation, interviewers dumping anyone they think is using AI for "help", code reviewers dumping any take-home assignments that have even COMMENTS massaged by AI).


> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,

I don't understand why people seem so impatient about AI adoption.

AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better (Only true for a subset of tasks). In areas where either of these cases apply, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy - AI needs to be a better product and good products take time.


> I don't understand why people seem so impatient about AI adoption.

We're burning absurd, genuinely farcical amounts of money on these tools now, so of course they're impatient. There's Trillions (with a "T") riding on this massive hypewave, and the VCs and their ilk are getting nervous because they see people are waking up to the reality that it's at best a kinda useful tool in some situations and not the new God that we were promised that can do literally everything ever.


Well that's capital's problem. Don't make it mine!

Well said!



