Hacker News | new | past | comments | ask | show | jobs | submit — ragequittah's comments


Usually the thing you've learned after googling for half an hour is that Google isn't very useful for search anymore.

This to me seems like saying you can learn nothing from a book unless you yourself have written it. You can read the code the LLM writes the same as you can read the code your colleagues write. Moreover, you have to tell it quite explicitly what to write for it to be very useful. You're still designing what it's doing; you just don't have to write every line.

Code review != code design.

Nor does reading a book teach you how to write a book.


“Reading is the creative center of a writer’s life.” — Stephen King, On Writing

You need to design the code in order to tell the LLM how to write it. The LLM can help with this, but generally it's better to have a full plan in place to give it beforehand. I've said it before elsewhere, but I think this argument will eventually sound like the people arguing you don't truly know how to code unless you're using assembly language for everything. I mean, sure, hand-written assembly can be more efficient, but who has the time to bother in a post-compiler world?


I imagine this same argument happening when people stopped using machine code and assembly en masse and started using FORTRAN or COBOL. You don't really know what you're doing unless you're spending the effort I spent!

> "I imagine this same argument happening when people stopped using machine code and assembly en masse and started using FORTRAN or COBOL."

Yeah, certainly. But since this has nothing to do with my argument, which was an answer to the very existential question of a (postulated) non-coder, and not a comment on a forgotten pissing contest between coders, it's utterly irrelevant.

:(


This is quite funny when you created the pissing contest between "coders" and "non-coders" in this thread. Those labels seem very important to you.

I didn't "create" the pissing contest, I merely pointed it out in someone else's drivel.

And of course, these labels are important to me for (precise) language defines the boundaries of my world; coder vs. non-coder, medico vs. quack, writer vs. analphabet, truth vs. lie, etc. Elementary.


I find it quite interesting that you categorize non-coders the same as quacks, analphabets, and lies.

I would never consider myself a coder - though I can and have written quite a lot of code over the years - because it has always been a means to an end for me. I don't particularly enjoy writing code. Programming isn't a passion. I can and have built working programs without a line of copy-and-pasted code from Stack Overflow or an LLM, because I needed to in order to solve a problem.

But there are things I would call myself, things I do and enjoy and am good at. But I wouldn't position people who can't do those things as being the same as a quack.

You also claim not to be the one who started the pissing contest, but you called someone who claims to have written plenty of code themselves a coding-illiterate just because now they'd rather use an LLM than do it themselves. I suppose you could claim they are lying about it, or make some no-true-Scotsman type argument, but that seems silly.

You basically took some people talking about their own opinions on what they find enjoyable, and saying that AI-driven coding scratches that itch for them even more than writing code itself does, and then began to be quite hostile towards them with boatloads of denigrating language and derision.


> "I find it quite interesting that you categorize non-coders the same as quacks, analphabets, and lies."

I categorized them not as "the same", but as examples of concept-delineating polar opposites. This as answer to somebody who essentially trotted out the "but they're just labels!1!!" line, which was already considered intellectually lazy before it was turned into a sad meme by people who married their bongs back in the 90s.

> "I would never consider myself a coder - though I can and have written quite a lot of code over the years [...]"

Good for you. A coder, to me, is simply somebody who can produce working programs on their own and has the necessary occupational (self-)respect. This fans out into several degrees of capability, of course.

> "[...] but you called someone who claims to have written plenty of code themselves a coding-illiterate just because now they'd rather use an LLM than do it themselves. "

No. I simply answered this one question:

> “If I’m not the man who can [...] build working programs… WHO AM I?”

Aside from that I reflected on an insulting(ly daft) but extremely common attitude amongst sloperators, especially on parasocial media platforms:

> "As it turns out, writing code isn’t super useful."

Imagine I go to some other SIG to say shit like this: As it turns out, [reading and writing words/playing or operating an instrument or tool/drawing/calculating/...] isn’t "super useful". Suckers!

I'd expect to get properly mocked and then banned.

> "You basically took some people talking about their own opinions on what they find enjoyable, [...]"

Congratulations, you're just the next strawman salesman. For the last time, bambini: I don't care if this guy uses LLMs and enjoys it... for that was never the focus of my argument at all.


Usually studying a textbook means reconceptualizing it in whatever way fits the way you learn. For some people that's notes, for some it's flash cards, for some it's reading the textbook twice and they just get it.

To imagine LLMs have no use case here seems dishonest. If I don't understand a particularly hard part of the subject matter and the textbook doesn't expand on it enough, I can tell the LLM to break it down further with sources. I know this works because I've been doing it with Google (slowly, very slowly) for decades. Now it's just way more convenient to get to the ideas you want to learn about and expand on them as far as you want to go.


My issue with using LLMs for this use case is that they can be wrong, and when they are, I'm doing the research myself anyway.

The times it's wrong have become vanishingly small, at least for the things I use it for (mostly technical). With ChatGPT on extended thinking, fed the docs URL or a PDF or three to start, you'll very rarely get an error. Especially when compared to Google / Stack Exchange.

I think this might be a big part of the problem with the conversation about AI right now. The models have become so much better in the last ~6 months in my experience and lots of people wrote them off 1-2 years ago after they couldn't do x and 'we've hit a wall' was being thrown around everywhere.


I just wonder why movies get away with licenses for both music and depicting cars etc. for eternity. It seems like they just added weird unnecessary rules for video games. I can't imagine a situation where Stephen King has to renew with Plymouth every few years. It would seem ridiculous for any other art form; why is it so easily accepted for this one?


There is a small part of me that wonders if my $3000 computer is worth it when that could get me about 12 years of GeForce Now gaming, with an updated graphics card and processor at all times. But I like to tinker, so I'll probably end up spending $10k or more by the end of those 12 years instead.
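For what it's worth, the back-of-envelope math roughly checks out. A minimal sketch, assuming a top-tier streaming subscription at about $20/month (my assumed figure, not from the comment; actual pricing varies):

```python
# Rough cost comparison: one-time PC build vs. cloud-gaming subscription.
PC_COST = 3000        # one-time desktop build, USD (from the comment)
MONTHLY_SUB = 20      # assumed subscription price, USD/month

months_covered = PC_COST / MONTHLY_SUB   # how many months the PC budget buys
years_covered = months_covered / 12

print(f"{years_covered:.1f} years of streaming for the price of the PC")
# → 12.5 years of streaming for the price of the PC
```

At that assumed price the $3000 build funds about 12.5 years of subscription, which matches the "about 12 years" estimate above.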


It's like not understanding why with so many books / movies / music albums people choose to read or watch or listen to specific older ones.


My experience with it is the code just wouldn't have existed in the first place otherwise. Nobody was going to pay thousands of dollars for it and it just needs to work and be accurate. It's not the backend code you give root access to on the company server, it's automating the boring aspects of the job with a basic frontend.

I've been able to save people money and time. If someone comes in later and has a more elegant solution for the same $60 effort I spent great! Otherwise I'll continue saving people money and time with my non-perfect code.


That's a fair point.

In accounting terms, you are treating AI code as "OPEX" (Operating Expense) rather than "CAPEX" (Capital Expenditure). As long as we treat these $60 quick fixes as depreciating assets (use it and throw it away), it's great ROI.

My warning was specifically about the danger of mistaking these quick fixes for long-term capital assets. As long as you know it's a disposable tool, not a foundation, we are on the same page.

